Review: Current Build Flow and RFS Integration Hook Points
This document reviews the current Zero-OS Alpine initramfs build flow, identifies reliable sources for kernel versioning and artifacts, and specifies clean integration points for RFS flist generation and later runtime overmounts without modifying existing code paths.
Build flow overview
Primary orchestrator: scripts/build.sh
Key sourced libraries:
Main stages executed (incremental via stage_run()):
- alpine_extract, alpine_configure, alpine_packages
- alpine_firmware
- components_build, components_verify
- kernel_modules
- init_script, components_copy, zinit_setup
- modules_setup, modules_copy
- cleanup, validation
- initramfs_create, initramfs_test, kernel_build
- boot_tests
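The stages above are gated by stage_run() so a rerun skips completed work. The real implementation lives in scripts/build.sh and its libraries; the following is only a minimal sketch of the marker-file pattern such a runner typically uses (everything besides the stage_run() name and the stage function names is an assumption):

```bash
# Hypothetical sketch of an incremental stage runner; the actual stage_run()
# in scripts/build.sh may differ (marker-file name and location are assumed).
STAGE_DIR=".build-stages"

stage_run() {
    local stage="$1"
    mkdir -p "$STAGE_DIR"
    # Skip stages that completed in a previous run
    if [ -f "$STAGE_DIR/$stage.done" ]; then
        echo "[skip] $stage"
        return 0
    fi
    echo "[run ] $stage"
    "$stage" && touch "$STAGE_DIR/$stage.done"
}

# Usage (stage names match the list above):
#   stage_run alpine_extract
#   stage_run kernel_modules
```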
Where key artifacts come from
- Kernel full version (a recomputation sketch follows this list):
  - Derived deterministically using kernel_get_full_version()
  - Computed as: KERNEL_VERSION from config/build.conf + CONFIG_LOCALVERSION from config/kernel.config
  - Example target: 6.12.44-Zero-OS
- Built modules in container:
  - Stage: kernel_build_modules()
  - Builds and installs into the container root: /lib/modules/<FULL_VERSION>
  - Runs depmod in the container and sets:
    - CONTAINER_MODULES_PATH=/lib/modules/<FULL_VERSION>
    - KERNEL_FULL_VERSION=<FULL_VERSION>
- Initramfs modules copy and metadata:
  - Stage: initramfs_copy_resolved_modules()
  - Copies selected modules and dependency metadata into the initramfs under initramfs/lib/modules/<FULL_VERSION>
- Firmware content:
  - Preferred (per user): a full tree at $root/firmware in the dev-container, intended to be packaged as-is
  - Fallback within the build flow: firmware packages installed by alpine_install_firmware() into initramfs/lib/firmware
- rfs binary:
  - Built via build_rfs() into components/rfs/target/x86_64-unknown-linux-musl/release/rfs
  - Also expected to be available on PATH inside the dev-container
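For reference, the version derivation can be reproduced in a few lines of shell. This is a sketch, assuming config/build.conf is shell-sourceable key=value and kernel.config uses the standard CONFIG_LOCALVERSION="..." form; it is not the actual kernel_get_full_version() implementation:

```bash
# Recompute the full kernel version from the two config files, per the
# derivation described above.
source config/build.conf   # provides KERNEL_VERSION, e.g. 6.12.44

# CONFIG_LOCALVERSION="-Zero-OS"  ->  -Zero-OS
localversion=$(sed -n 's/^CONFIG_LOCALVERSION="\(.*\)"/\1/p' config/kernel.config)

FULL_KERNEL_VERSION="${KERNEL_VERSION}${localversion}"
echo "$FULL_KERNEL_VERSION"   # expected: 6.12.44-Zero-OS
```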
udev and module load sequencing at runtime
- zinit units present:
  - udevd: config/zinit/udevd.yaml
  - depmod: config/zinit/depmod.yaml
  - udev trigger: config/zinit/udev-trigger.yaml, which calls udev.sh
- initramfs module orchestration:
  - Module resolution logic: initramfs_setup_modules() and initramfs_resolve_module_dependencies()
  - Load scripts created for zinit:
    - stage1: initramfs_create_module_scripts() emits /etc/zinit/init/stage1-modules.sh
    - stage2: currently disabled in config
Current integration gaps for RFS flists
- There is no existing code that:
  - Packs modules or firmware into RFS flists (.fl SQLite manifests)
  - Publishes the associated content-addressed blobs to a store
  - Uploads the .fl manifest to an S3 bucket (separate from the blob store)
  - Mounts these flists at runtime prior to udev coldplug
Reliable inputs for RFS pack
- Kernel full version: use the kernel_get_full_version() logic (never uname -r inside the container)
- Modules source tree candidates (in priority order):
  - /lib/modules/<FULL_VERSION> (from kernel_build_modules())
  - initramfs/lib/modules/<FULL_VERSION> (if the container path is unavailable; less ideal)
- Firmware source tree candidates (in priority order):
  - $PROJECT_ROOT/firmware (externally provided tree; user-preferred)
  - initramfs/lib/firmware (APK-installed fallback)
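A minimal selection sketch following that priority order (it assumes FULL_KERNEL_VERSION and PROJECT_ROOT are already set and that it runs from the repository root, where initramfs/ is the build output tree):

```bash
# Select modules/firmware source trees in the priority order listed above.
if [ -d "/lib/modules/${FULL_KERNEL_VERSION}" ]; then
    MODULES_SRC="/lib/modules/${FULL_KERNEL_VERSION}"          # container install
else
    MODULES_SRC="initramfs/lib/modules/${FULL_KERNEL_VERSION}" # less ideal fallback
fi

if [ -d "${PROJECT_ROOT}/firmware" ]; then
    FIRMWARE_SRC="${PROJECT_ROOT}/firmware"   # user-provided full tree
else
    FIRMWARE_SRC="initramfs/lib/firmware"     # APK-installed fallback
fi
```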
S3 configuration needs
A new configuration file is required to avoid touching existing code:
- Path: config/rfs.conf (to be created)
- Required keys:
  - S3_ENDPOINT (e.g., https://s3.example.com:9000)
  - S3_REGION
  - S3_BUCKET
  - S3_PREFIX (path prefix under the bucket for blobs and, optionally, manifests)
  - S3_ACCESS_KEY
  - S3_SECRET_KEY
- These values are consumed only by the standalone scripts, not by the existing build flow
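For illustration, a config/rfs.conf covering these keys might look as follows; every value is a placeholder:

```bash
# config/rfs.conf -- consumed only by the standalone scripts/rfs/* scripts.
# All values below are placeholders.
S3_ENDPOINT="https://s3.example.com:9000"
S3_REGION="us-east-1"
S3_BUCKET="zos"
S3_PREFIX="flists/zosbuilder"
S3_ACCESS_KEY="REPLACE_ME"
S3_SECRET_KEY="REPLACE_ME"
```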
Proposed standalone scripts (no existing code changes)
Directory: scripts/rfs
- common.sh
  - Reads config/build.conf and config/kernel.config to compute FULL_KERNEL_VERSION
  - Reads config/rfs.conf and constructs the RFS S3 store URI
  - Detects the rfs binary from PATH or components/rfs
  - Locates the modules and firmware source trees per the priority order above
- pack-modules.sh (a condensed sketch follows this list)
  - Name: modules-<FULL_KERNEL_VERSION>.fl
  - Command: rfs pack -m dist/flists/modules-...fl -s s3://... /lib/modules/<FULL_VERSION>
  - Then uploads the .fl manifest to s3://BUCKET/PREFIX/manifests/ via the aws CLI, if available
- pack-firmware.sh
  - Name: firmware-<TAG>.fl by default, overridable via FIRMWARE_TAG
  - Source: $PROJECT_ROOT/firmware preferred, else initramfs/lib/firmware
  - Packs with rfs and uploads the .fl manifest in the same way
- verify-flist.sh
  - rfs flist inspect dist/flists/NAME.fl
  - rfs flist tree dist/flists/NAME.fl | head
  - Optional test mount on a temporary mountpoint when requested
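A condensed sketch of pack-modules.sh under these assumptions. The rfs pack invocation mirrors the command line quoted above; the exact shape of the S3 store URI is an assumption that must be checked against the rfs documentation:

```bash
#!/bin/sh
# Sketch of scripts/rfs/pack-modules.sh. Assumes common.sh exports
# FULL_KERNEL_VERSION, MODULES_SRC, and the S3_* keys from config/rfs.conf.
set -eu
. "$(dirname "$0")/common.sh"

FLIST="dist/flists/modules-${FULL_KERNEL_VERSION}.fl"
mkdir -p dist/flists

# Store URI shape is an assumption; verify against the rfs documentation.
STORE="s3://${S3_ACCESS_KEY}:${S3_SECRET_KEY}@${S3_ENDPOINT#https://}/${S3_BUCKET}/${S3_PREFIX}?region=${S3_REGION}"

# Pack the module tree: blobs go to the store, the manifest to dist/flists/
rfs pack -m "$FLIST" -s "$STORE" "$MODULES_SRC"

# Upload the manifest itself to a separate manifests/ prefix when aws exists
if command -v aws >/dev/null 2>&1; then
    aws --endpoint-url "$S3_ENDPOINT" s3 cp "$FLIST" \
        "s3://${S3_BUCKET}/${S3_PREFIX}/manifests/"
fi

# Quick sanity checks (verify-flist.sh does this more thoroughly)
rfs flist inspect "$FLIST"
rfs flist tree "$FLIST" | head
```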
Future runtime units (deferred)
Will be added as new zinit units once flist generation is validated:
- Mount firmware flist read-only at /lib/firmware (overmount to hide initramfs firmware beneath)
- Mount modules flist read-only at /lib/modules/<FULL_VERSION>
- Run depmod -a <FULL_VERSION>
- Run udev coldplug sequence (reload, trigger add, settle)
Placement relative to current units:
- Must occur before udev-trigger.yaml
- Should ensure depmod.yaml is sequenced after the modules mount makes the modules available
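Sketched end to end as one shell sequence (in practice each numbered step would become its own zinit unit with ordering constraints); the rfs mount flags and the /etc/rfs/*.fl manifest paths here are assumptions to validate:

```bash
#!/bin/sh
# Sketch of the deferred runtime sequence described above.
set -eu
VER="$(uname -r)"   # valid at runtime, unlike during the build

# 1. Overmount read-only trees from the flists (zinit would supervise these)
rfs mount -m /etc/rfs/firmware.fl /lib/firmware &
rfs mount -m /etc/rfs/modules.fl "/lib/modules/${VER}" &

# Wait until both FUSE mounts are actually up before depending on them
until mountpoint -q /lib/firmware && mountpoint -q "/lib/modules/${VER}"; do
    sleep 1
done

# 2. Rebuild module dependency metadata against the mounted tree
depmod -a "$VER"

# 3. Coldplug: replay kernel add events so drivers bind to present hardware
udevadm control --reload
udevadm trigger --action=add
udevadm settle
```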
Flow summary (Mermaid)
```mermaid
flowchart TD
    A[Build start] --> B[alpine_extract/configure/packages]
    B --> C[components_build verify]
    C --> D[kernel_modules<br/>install modules in container<br/>set KERNEL_FULL_VERSION]
    D --> E[init_script zinit_setup]
    E --> F[modules_setup copy]
    F --> G[cleanup validation]
    G --> H[initramfs_create test kernel_build]
    H --> I[boot_tests]
    subgraph RFS standalone
        R1[Compute FULL_VERSION<br/>from configs]
        R2[Select sources:<br/>modules /lib/modules/FULL_VERSION<br/>firmware PROJECT_ROOT/firmware or initramfs/lib/firmware]
        R3[Pack modules flist<br/>rfs pack -s s3://...]
        R4[Pack firmware flist<br/>rfs pack -s s3://...]
        R5[Upload .fl manifests<br/>to S3 manifests/]
        R6[Verify flists<br/>inspect/tree/mount opt]
    end
    H -. post-build manual .-> R1
    R1 --> R2 --> R3 --> R5
    R2 --> R4 --> R5
    R3 --> R6
    R4 --> R6
```
Conclusion
- The existing build flow provides deterministic kernel versioning and installs modules into the container at /lib/modules/<FULL_VERSION>, which is ideal for RFS packing.
- Firmware can be sourced from the user-provided tree or the initramfs fallback.
- RFS flist creation and publishing can be introduced entirely as standalone scripts and configuration without modifying current code.
- Runtime overmounting and coldplug can be added later via new zinit units once flist generation is validated.