docs: sync with code (topologies, mount scheme, CLI flags, UEFI/BIOS, fstab) and fix relative src links in docs/ to ../src/
docs/SPECS.md
@@ -3,31 +3,31 @@
 This document finalizes core specifications required before code skeleton implementation. It complements [docs/ARCHITECTURE.md](docs/ARCHITECTURE.md) and [docs/SCHEMA.md](docs/SCHEMA.md), and references the API declarations listed in [docs/API.md](docs/API.md).

 Linked modules and functions
-- Logging module: [src/logging/mod.rs](src/logging/mod.rs)
-  - [fn init_logging(opts: &LogOptions) -> Result<()>](src/logging/mod.rs:1)
-- Report module: [src/report/state.rs](src/report/state.rs)
-  - [const REPORT_VERSION: &str](src/report/state.rs:1)
-  - [fn build_report(...) -> StateReport](src/report/state.rs:1)
-  - [fn write_report(report: &StateReport) -> Result<()>](src/report/state.rs:1)
-- Device module: [src/device/discovery.rs](src/device/discovery.rs)
-  - [fn discover(filter: &DeviceFilter) -> Result<Vec<Disk>>](src/device/discovery.rs:1)
-- Partitioning module: [src/partition/plan.rs](src/partition/plan.rs)
-  - [fn plan_partitions(disks: &[Disk], cfg: &Config) -> Result<PartitionPlan>](src/partition/plan.rs:1)
-  - [fn apply_partitions(plan: &PartitionPlan) -> Result<Vec<PartitionResult>>](src/partition/plan.rs:1)
-- Filesystems module: [src/fs/plan.rs](src/fs/plan.rs)
-  - [fn plan_filesystems(disks: &[Disk], parts: &[PartitionResult], cfg: &Config) -> Result<FsPlan>](src/fs/plan.rs:1)
-  - [fn make_filesystems(plan: &FsPlan) -> Result<Vec<FsResult>>](src/fs/plan.rs:1)
-- Mount module: [src/mount/ops.rs](src/mount/ops.rs)
-  - [fn plan_mounts(fs_results: &[FsResult], cfg: &Config) -> Result<MountPlan>](src/mount/ops.rs:1)
-  - [fn apply_mounts(plan: &MountPlan) -> Result<Vec<MountResult>>](src/mount/ops.rs:1)
-  - [fn maybe_write_fstab(mounts: &[MountResult], cfg: &Config) -> Result<()>](src/mount/ops.rs:1)
-- Idempotency module: [src/idempotency/mod.rs](src/idempotency/mod.rs)
-  - [fn detect_existing_state() -> Result<Option<StateReport>>](src/idempotency/mod.rs:1)
-  - [fn is_empty_disk(disk: &Disk) -> Result<bool>](src/idempotency/mod.rs:1)
-- CLI module: [src/cli/args.rs](src/cli/args.rs)
-  - [fn from_args() -> Cli](src/cli/args.rs:1)
-- Orchestrator: [src/orchestrator/run.rs](src/orchestrator/run.rs)
-  - [fn run(ctx: &Context) -> Result<()>](src/orchestrator/run.rs:1)
+- Logging module: [src/logging/mod.rs](../src/logging/mod.rs)
+  - [fn init_logging(opts: &LogOptions) -> Result<()>](../src/logging/mod.rs:1)
+- Report module: [src/report/state.rs](../src/report/state.rs)
+  - [const REPORT_VERSION: &str](../src/report/state.rs:1)
+  - [fn build_report(...) -> StateReport](../src/report/state.rs:1)
+  - [fn write_report(report: &StateReport) -> Result<()>](../src/report/state.rs:1)
+- Device module: [src/device/discovery.rs](../src/device/discovery.rs)
+  - [fn discover(filter: &DeviceFilter) -> Result<Vec<Disk>>](../src/device/discovery.rs:1)
+- Partitioning module: [src/partition/plan.rs](../src/partition/plan.rs)
+  - [fn plan_partitions(disks: &[Disk], cfg: &Config) -> Result<PartitionPlan>](../src/partition/plan.rs:1)
+  - [fn apply_partitions(plan: &PartitionPlan) -> Result<Vec<PartitionResult>>](../src/partition/plan.rs:1)
+- Filesystems module: [src/fs/plan.rs](../src/fs/plan.rs)
+  - [fn plan_filesystems(disks: &[Disk], parts: &[PartitionResult], cfg: &Config) -> Result<FsPlan>](../src/fs/plan.rs:1)
+  - [fn make_filesystems(plan: &FsPlan) -> Result<Vec<FsResult>>](../src/fs/plan.rs:1)
+- Mount module: [src/mount/ops.rs](../src/mount/ops.rs)
+  - [fn plan_mounts(fs_results: &[FsResult], cfg: &Config) -> Result<MountPlan>](../src/mount/ops.rs:1)
+  - [fn apply_mounts(plan: &MountPlan) -> Result<Vec<MountResult>>](../src/mount/ops.rs:1)
+  - [fn maybe_write_fstab(mounts: &[MountResult], cfg: &Config) -> Result<()>](../src/mount/ops.rs:1)
+- Idempotency module: [src/idempotency/mod.rs](../src/idempotency/mod.rs)
+  - [fn detect_existing_state() -> Result<Option<StateReport>>](../src/idempotency/mod.rs:1)
+  - [fn is_empty_disk(disk: &Disk) -> Result<bool>](../src/idempotency/mod.rs:1)
+- CLI module: [src/cli/args.rs](../src/cli/args.rs)
+  - [fn from_args() -> Cli](../src/cli/args.rs:1)
+- Orchestrator: [src/orchestrator/run.rs](../src/orchestrator/run.rs)
+  - [fn run(ctx: &Context) -> Result<()>](../src/orchestrator/run.rs:1)

 ---
@@ -39,7 +39,7 @@ Goals

 Configuration
 - Levels: error, warn, info, debug (default info).
-- Propagation: single global initialization via [fn init_logging](src/logging/mod.rs:1). Subsequent calls must be no-ops.
+- Propagation: single global initialization via [fn init_logging](../src/logging/mod.rs:1). Subsequent calls must be no-ops.

 Implementation notes
 - Use tracing and tracing-subscriber.
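As a sketch of the single-initialization requirement, assuming tracing-subscriber's fmt subscriber with an EnvFilter (env-filter feature); the real `init_logging` takes `&LogOptions`, returns `Result`, and may also log to a file.

```rust
use std::sync::Once;
use tracing_subscriber::EnvFilter;

static INIT: Once = Once::new();

/// Sketch only: install the global subscriber once; later calls are no-ops.
pub fn init_logging(level: &str) {
    INIT.call_once(|| {
        tracing_subscriber::fmt()
            .with_env_filter(EnvFilter::new(level)) // e.g. "info" by default
            .init();
    });
}
```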
@@ -57,7 +57,7 @@ Location
 - Default output: /run/zosstorage/state.json

 Versioning
-- Include a top-level string field version equal to [REPORT_VERSION](src/report/state.rs:1). Start with v1.
+- Include a top-level string field version equal to [REPORT_VERSION](../src/report/state.rs:1). Start with v1.

 Schema example
@@ -154,17 +154,17 @@ Default exclude patterns
 - ^/dev/fd\\d+$

 Selection policy
-- Compile include and exclude regex into [DeviceFilter](src/device/discovery.rs).
+- Compile include and exclude regex into [DeviceFilter](../src/device/discovery.rs).
 - Enumerate device candidates and apply:
   - Must match at least one include.
   - Must not match any exclude.
   - Must be larger than min_size_gib (default 10).
 - Probing
   - Gather size, rotational flag, model, serial when available.
-  - Expose via [struct Disk](src/device/discovery.rs:1).
+  - Expose via [struct Disk](../src/device/discovery.rs:1).

 No eligible disks
-- Return a specific error variant in [enum Error](src/errors.rs:1).
+- Return a specific error variant in [enum Error](../src/errors.rs:1).

 ---
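A minimal sketch of that selection policy, assuming the regex crate; the field and type names here are illustrative, not the actual definitions in src/device/discovery.rs.

```rust
// Sketch of the include/exclude/min-size policy described in the hunk above.
use regex::Regex;

const GIB: u64 = 1024 * 1024 * 1024;

pub struct DeviceFilter {
    pub include: Vec<Regex>,
    pub exclude: Vec<Regex>,
    pub min_size_gib: u64, // default 10
}

pub struct Disk {
    pub path: String, // e.g. /dev/nvme0n1
    pub size_bytes: u64,
    pub rotational: bool,
}

impl DeviceFilter {
    pub fn accepts(&self, disk: &Disk) -> bool {
        self.include.iter().any(|re| re.is_match(&disk.path))
            && !self.exclude.iter().any(|re| re.is_match(&disk.path))
            && disk.size_bytes > self.min_size_gib * GIB
    }
}
```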
@@ -181,17 +183,20 @@ Layout defaults
 - Cache partitions (only in ssd_hdd_bcachefs): GPT name zoscache on SSD.

 Per-topology specifics
-- single: All roles on the single disk.
-- dual_independent: Each disk gets BIOS boot + ESP + data.
-- ssd_hdd_bcachefs: SSD gets BIOS boot + ESP + zoscache, HDD gets BIOS boot + ESP + zosdata.
+- btrfs_single: All roles on the single disk; data formatted as btrfs.
+- bcachefs_single: All roles on the single disk; data formatted as bcachefs.
+- dual_independent: On each eligible disk (one or more), create BIOS boot (if applicable), ESP, and data.
+- bcachefs_2copy: Create data partitions on two or more disks; later formatted as one multi-device bcachefs spanning all data partitions.
+- ssd_hdd_bcachefs: SSD gets BIOS boot + ESP + zoscache; HDD gets BIOS boot + ESP + zosdata; combined later into one bcachefs.
+- btrfs_raid1: Two disks minimum; data partitions mirrored via btrfs RAID1.

 Safety checks
 - Ensure unique partition UUIDs.
-- Verify no pre-existing partitions or signatures. Use blkid or similar via [run_cmd_capture](src/util/mod.rs:1).
-- After partition creation, run udev settle via [udev_settle](src/util/mod.rs:1).
+- Verify no pre-existing partitions or signatures. Use blkid or similar via [run_cmd_capture](../src/util/mod.rs:1).
+- After partition creation, run udev settle via [udev_settle](../src/util/mod.rs:1).

 Application
-- Utilize sgdisk helpers in [apply_partitions](src/partition/plan.rs:1).
+- Utilize sgdisk helpers in [apply_partitions](../src/partition/plan.rs:1).

 ---
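To make the sgdisk usage concrete, a hedged sketch of what apply_partitions might run for one disk. Partition sizes and the placement of the zosboot/zosdata GPT names are assumptions, and the spec's run_cmd_capture/udev_settle helpers are stood in for by direct Command calls.

```rust
// Illustrative only: one disk partitioned as BIOS boot + ESP + data.
use std::process::Command;

fn sgdisk(disk: &str, args: &[&str]) -> std::io::Result<()> {
    let status = Command::new("sgdisk").args(args).arg(disk).status()?;
    if status.success() {
        Ok(())
    } else {
        Err(std::io::Error::new(std::io::ErrorKind::Other, "sgdisk failed"))
    }
}

fn partition_one_disk(disk: &str) -> std::io::Result<()> {
    sgdisk(disk, &["-n", "1:0:+1M", "-t", "1:EF02"])?; // BIOS boot (if applicable)
    sgdisk(disk, &["-n", "2:0:+512M", "-t", "2:EF00", "-c", "2:zosboot"])?; // ESP
    sgdisk(disk, &["-n", "3:0:0", "-t", "3:8300", "-c", "3:zosdata"])?; // data, rest of disk
    // Let udev settle before probing or formatting the new partitions.
    Command::new("udevadm").args(["settle"]).status()?;
    Ok(())
}
```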
@@ -199,34 +202,38 @@ Application

 Kinds
 - Vfat for ESP, label ZOSBOOT.
-- Btrfs for data on single and dual_independent.
-- Bcachefs for ssd_hdd_bcachefs (SSD cache, HDD backing).
+- Btrfs for data in btrfs_single, dual_independent, and btrfs_raid1 (with RAID1 profile).
+- Bcachefs for data in bcachefs_single, ssd_hdd_bcachefs (SSD cache + HDD backing), and bcachefs_2copy (multi-device).
+- All data filesystems use label ZOSDATA.

 Defaults
-- btrfs: compression zstd:3, raid_profile none unless explicitly set to raid1 in btrfs_raid1 mode.
-- bcachefs: cache_mode promote, compression zstd, checksum crc32c.
+- btrfs: compression zstd:3, raid_profile none unless explicitly set; for btrfs_raid1 use -m raid1 -d raid1.
+- bcachefs: cache_mode promote, compression zstd, checksum crc32c; for bcachefs_2copy use `--replicas=2` (data and metadata).
 - vfat: ESP label ZOSBOOT.

 Planning and execution
-- Decide mapping of [PartitionResult](src/partition/plan.rs:1) to [FsSpec](src/fs/plan.rs:1) in [plan_filesystems](src/fs/plan.rs:1).
-- Create filesystems in [make_filesystems](src/fs/plan.rs:1) through wrapped mkfs tools.
-- Capture resulting identifiers (fs uuid, label) in [FsResult](src/fs/plan.rs:1).
+- Decide mapping of [PartitionResult](../src/partition/plan.rs:1) to [FsSpec](../src/fs/plan.rs:1) in [plan_filesystems](../src/fs/plan.rs:1).
+- Create filesystems in [make_filesystems](../src/fs/plan.rs:1) through wrapped mkfs tools.
+- Capture resulting identifiers (fs uuid, label) in [FsResult](../src/fs/plan.rs:1).

 ---
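Read as examples, those defaults map to mkfs invocations roughly like the ones below, as make_filesystems might wrap them. Device paths are placeholders, and the bcachefs label and checksum flags are omitted because their exact spelling is not given here.

```rust
// Illustrative only: mkfs command lines implied by the defaults above.
use std::process::Command;

fn mkfs_examples() -> std::io::Result<()> {
    // ESP: vfat labeled ZOSBOOT.
    Command::new("mkfs.vfat")
        .args(["-n", "ZOSBOOT", "/dev/sda2"])
        .status()?;

    // btrfs_raid1: mirrored metadata and data across two data partitions, label ZOSDATA.
    Command::new("mkfs.btrfs")
        .args(["-L", "ZOSDATA", "-m", "raid1", "-d", "raid1", "/dev/sda3", "/dev/sdb3"])
        .status()?;

    // bcachefs_2copy: one multi-device bcachefs with two replicas.
    Command::new("bcachefs")
        .args(["format", "--replicas=2", "--compression=zstd", "/dev/sda3", "/dev/sdb3"])
        .status()?;
    Ok(())
}
```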
 ## 6. Mount scheme and fstab policy

-Scheme
-- per_uuid under /var/cache: directories named as filesystem UUIDs.
+Runtime root mounts (all data filesystems)
+- Each data filesystem is root-mounted at `/var/mounts/{UUID}` (runtime only).
+- btrfs root mount options: `rw,noatime,subvolid=5`
+- bcachefs root mount options: `rw,noatime`

-Mount options
-- btrfs: ssd when non-rotational underlying device, compress from config, defaults otherwise.
-- vfat: defaults, utf8.
+Final subvolume/subdir mounts (from the primary data filesystem)
+- Create or ensure subvolumes named: `system`, `etc`, `modules`, `vm-meta`
+- Mount targets: `/var/cache/system`, `/var/cache/etc`, `/var/cache/modules`, `/var/cache/vm-meta`
+- btrfs options: `-o rw,noatime,subvol={name}`
+- bcachefs options: `-o rw,noatime,X-mount.subdir={name}`

-fstab
+fstab policy
 - Disabled by default.
-- When enabled, [maybe_write_fstab](src/mount/ops.rs:1) writes deterministic entries sorted by target path.
+- When enabled, [maybe_write_fstab](../src/mount/ops.rs:1) writes only the four final subvolume/subdir entries using `UUID=` sources, in deterministic target order. Root mounts under `/var/mounts` are excluded.

 ---
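A small sketch of the fstab entries this policy yields for the four final mounts, assuming the UUID comes from the primary data filesystem's FsResult; the real writer also keeps the output deterministically ordered by target path.

```rust
// Illustrative only: build the four fstab lines for the final mounts.
// The UUID is a placeholder supplied by the caller.
fn fstab_lines(uuid: &str, is_btrfs: bool) -> Vec<String> {
    ["system", "etc", "modules", "vm-meta"]
        .iter()
        .map(|name| {
            let opts = if is_btrfs {
                format!("rw,noatime,subvol={name}")
            } else {
                format!("rw,noatime,X-mount.subdir={name}")
            };
            // <source> <target> <fstype> <options> <dump> <pass>
            format!(
                "UUID={uuid} /var/cache/{name} {} {opts} 0 0",
                if is_btrfs { "btrfs" } else { "bcachefs" }
            )
        })
        .collect()
}
```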
@@ -235,20 +242,23 @@ fstab

 Signals for already-provisioned system
 - Expected GPT names found: zosboot, zosdata, and zoscache when applicable.
 - Filesystems with labels ZOSBOOT for ESP and ZOSDATA for all data filesystems.
-- When consistent with selected topology, [detect_existing_state](src/idempotency/mod.rs:1) returns a StateReport and orchestrator exits success without changes.
+- When consistent with selected topology, [detect_existing_state](../src/idempotency/mod.rs:1) returns a StateReport and orchestrator exits success without changes.

 Disk emptiness
-- [is_empty_disk](src/idempotency/mod.rs:1) checks for absence of partitions and FS signatures before any modification.
+- [is_empty_disk](../src/idempotency/mod.rs:1) checks for absence of partitions and FS signatures before any modification.

 ---
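A hedged sketch of the emptiness probe, using blkid low-level probing as the spec suggests; treating empty probe output as "no partitions and no signatures" is an assumption about blkid's behavior, not something the spec guarantees.

```rust
// Sketch in the spirit of is_empty_disk(): probe the whole-disk device and
// treat the absence of any reported signature (including PTTYPE) as empty.
use std::process::Command;

fn is_empty_disk(dev: &str) -> std::io::Result<bool> {
    let out = Command::new("blkid")
        .args(["-p", "-o", "export", dev])
        .output()?;
    // blkid prints nothing when low-level probing finds no signatures.
    Ok(out.stdout.is_empty())
}
```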
 ## 8. CLI flags and help text outline

-Flags mirrored by [struct Cli](src/cli/args.rs:1) parsed via [from_args](src/cli/args.rs:1)
+Flags mirrored by [struct Cli](../src/cli/args.rs:1) parsed via [from_args](../src/cli/args.rs:1)
 - --config PATH
 - --log-level LEVEL error | warn | info | debug
 - --log-to-file
 - --fstab enable fstab generation
+- --show print preview JSON to stdout (non-destructive)
+- --report PATH write preview JSON to file (non-destructive)
+- --apply perform partitioning, filesystem creation, and mounts (DESTRUCTIVE)
 - --force present but returns unimplemented error

 Kernel cmdline
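For illustration, a Cli struct mirroring those flags; the spec only references `struct Cli` and `from_args()`, so clap's derive API is an assumption here.

```rust
use clap::Parser;

/// Sketch of the flag surface described above.
#[derive(Parser, Debug)]
#[command(name = "zosstorage")]
struct Cli {
    /// Path to a configuration file.
    #[arg(long)]
    config: Option<String>,
    /// error | warn | info | debug (default info).
    #[arg(long, default_value = "info")]
    log_level: String,
    #[arg(long)]
    log_to_file: bool,
    /// Enable fstab generation.
    #[arg(long)]
    fstab: bool,
    /// Print preview JSON to stdout (non-destructive).
    #[arg(long)]
    show: bool,
    /// Write preview JSON to file (non-destructive).
    #[arg(long)]
    report: Option<String>,
    /// Perform partitioning, filesystem creation, and mounts (DESTRUCTIVE).
    #[arg(long)]
    apply: bool,
    /// Present but returns an unimplemented error.
    #[arg(long)]
    force: bool,
}

fn from_args() -> Cli {
    Cli::parse()
}
```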
@@ -257,7 +267,7 @@ Kernel cmdline
 Help text sections
 - NAME, SYNOPSIS, DESCRIPTION
 - CONFIG PRECEDENCE
-- TOPOLOGIES: single, dual_independent, ssd_hdd_bcachefs, btrfs_raid1
+- TOPOLOGIES: btrfs_single, bcachefs_single, dual_independent, bcachefs_2copy, ssd_hdd_bcachefs, btrfs_raid1
 - SAFETY AND IDEMPOTENCY
 - REPORTS
 - EXIT CODES: 0 success or already_provisioned, non-zero on error
@@ -267,9 +277,10 @@ Help text sections
 ## 9. Integration testing plan (QEMU KVM)

 Scenarios to scaffold in [tests/](tests/)
-- Single disk 40 GiB virtio: validates single topology end-to-end smoke.
-- Dual NVMe 40 GiB each: validates dual_independent topology.
-- SSD NVMe + HDD virtio: validates ssd_hdd_bcachefs topology.
+- Single disk 40 GiB virtio: validates btrfs_single topology end-to-end smoke.
+- Dual NVMe 40 GiB each: validates dual_independent topology (independent btrfs per disk).
+- SSD NVMe + HDD virtio: validates ssd_hdd_bcachefs topology (bcachefs with SSD cache/promote, HDD backing).
+- Three disks: validates bcachefs_2copy across data partitions using `--replicas=2`.
 - Negative: no eligible disks, or non-empty disk should abort.

 Test strategy
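One way the dual-NVMe scenario could be scaffolded as an ignored integration test; the guest kernel/initramfs paths are hypothetical placeholders for an image that runs zosstorage with --apply and then powers off, and the qemu flag set is an assumption rather than part of this spec.

```rust
use std::process::Command;

#[test]
#[ignore] // needs qemu-system-x86_64, KVM, and a guest image that runs zosstorage then powers off
fn dual_nvme_dual_independent_smoke() {
    // Two 40 GiB backing files for the NVMe devices.
    for img in ["/tmp/nvme0.img", "/tmp/nvme1.img"] {
        let ok = Command::new("qemu-img")
            .args(["create", "-f", "raw", img, "40G"])
            .status()
            .expect("qemu-img")
            .success();
        assert!(ok, "failed to create {img}");
    }
    let status = Command::new("qemu-system-x86_64")
        .args([
            "-enable-kvm", "-m", "2048", "-nographic",
            "-drive", "file=/tmp/nvme0.img,if=none,format=raw,id=d0",
            "-device", "nvme,drive=d0,serial=nvme0",
            "-drive", "file=/tmp/nvme1.img,if=none,format=raw,id=d1",
            "-device", "nvme,drive=d1,serial=nvme1",
            // Hypothetical guest that boots, runs `zosstorage --apply`,
            // exposes /run/zosstorage/state.json for inspection, and powers off.
            "-kernel", "testdata/vmlinuz",
            "-initrd", "testdata/initramfs.img",
            "-append", "console=ttyS0",
        ])
        .status()
        .expect("qemu-system-x86_64");
    assert!(status.success());
}
```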
@@ -281,7 +292,7 @@ Test strategy
 Artifacts to validate
 - Presence of expected partition GPT names.
 - Filesystems created with correct labels.
-- Mountpoints under /var/cache/<UUID> when running in a VM.
+- Runtime root mounts under `/var/mounts/{UUID}` and final subvolume targets at `/var/cache/{system,etc,modules,vm-meta}`.
 - JSON report validates against v1 schema.

 ---