# Review: Current Build Flow and RFS Integration Hook Points

This document reviews the current Zero-OS Alpine initramfs build flow, identifies reliable sources for kernel versioning and artifacts, and specifies clean integration points for RFS flist generation and later runtime overmounts without modifying existing code paths.

## Build flow overview

Primary orchestrator: [scripts/build.sh](scripts/build.sh)

Key sourced libraries:

- [alpine.sh](scripts/lib/alpine.sh)
- [components.sh](scripts/lib/components.sh)
- [kernel.sh](scripts/lib/kernel.sh)
- [initramfs.sh](scripts/lib/initramfs.sh)
- [stages.sh](scripts/lib/stages.sh)
- [docker.sh](scripts/lib/docker.sh)

Main stages executed (incremental via [stage_run()](scripts/lib/stages.sh:99)):

1) alpine_extract, alpine_configure, alpine_packages
2) alpine_firmware
3) components_build, components_verify
4) kernel_modules
5) init_script, components_copy, zinit_setup
6) modules_setup, modules_copy
7) cleanup, validation
8) initramfs_create, initramfs_test, kernel_build
9) boot_tests

## Where key artifacts come from

- Kernel full version:
  - Derived deterministically by [kernel_get_full_version()](scripts/lib/kernel.sh:14)
  - Computed as: KERNEL_VERSION from [config/build.conf](config/build.conf) + CONFIG_LOCALVERSION from [config/kernel.config](config/kernel.config)
  - Example target: 6.12.44-Zero-OS
- Built modules in container:
  - Stage: [kernel_build_modules()](scripts/lib/kernel.sh:228)
  - Builds and installs into the container root: /lib/modules/&lt;FULL_VERSION&gt;
  - Runs depmod in the container and sets:
    - CONTAINER_MODULES_PATH=/lib/modules/&lt;FULL_VERSION&gt;
    - KERNEL_FULL_VERSION=&lt;FULL_VERSION&gt;
- Initramfs modules copy and metadata:
  - Stage: [initramfs_copy_resolved_modules()](scripts/lib/initramfs.sh:846)
  - Copies selected modules and dependency metadata into the initramfs under initramfs/lib/modules/&lt;FULL_VERSION&gt;
- Firmware content:
  - Preferred (per user): a full tree at $root/firmware in the dev-container, intended to be packaged as-is
  - Fallback within the build flow: firmware packages installed
by [alpine_install_firmware()](scripts/lib/alpine.sh:392) into initramfs/lib/firmware
- rfs binary:
  - Built via [build_rfs()](scripts/lib/components.sh:299) into [components/rfs/target/x86_64-unknown-linux-musl/release/rfs](components/rfs/target/x86_64-unknown-linux-musl/release/rfs)
  - Also expected to be available on PATH inside the dev-container

## udev and module load sequencing at runtime

- zinit units present:
  - udevd: [config/zinit/udevd.yaml](config/zinit/udevd.yaml)
  - depmod: [config/zinit/depmod.yaml](config/zinit/depmod.yaml)
  - udev trigger: [config/zinit/udev-trigger.yaml](config/zinit/udev-trigger.yaml) calling [udev.sh](config/zinit/init/udev.sh)
- initramfs module orchestration:
  - Module resolution logic: [initramfs_setup_modules()](scripts/lib/initramfs.sh:225) and [initramfs_resolve_module_dependencies()](scripts/lib/initramfs.sh:313)
  - Load scripts created for zinit:
    - stage1: [initramfs_create_module_scripts()](scripts/lib/initramfs.sh:422) emits /etc/zinit/init/stage1-modules.sh
    - stage2 is currently disabled in config

## Current integration gaps for RFS flists

There is no existing code that:

- Packs modules or firmware into RFS flists (.fl sqlite manifests)
- Publishes the associated content-addressed blobs to a store
- Uploads the .fl manifest to an S3 bucket (separate from the blob store)
- Mounts these flists at runtime prior to udev coldplug

## Reliable inputs for RFS pack

- Kernel full version: use the [kernel_get_full_version()](scripts/lib/kernel.sh:14) logic (never `uname -r` inside the container)
- Modules source tree candidates (in priority order):
  1) /lib/modules/&lt;FULL_VERSION&gt; (from [kernel_build_modules()](scripts/lib/kernel.sh:228))
  2) initramfs/lib/modules/&lt;FULL_VERSION&gt; (if the container path is unavailable; less ideal)
- Firmware source tree candidates (in priority order):
  1) $PROJECT_ROOT/firmware (externally provided tree; user-preferred)
  2) initramfs/lib/firmware (APK-installed fallback)

## S3 configuration needs

A new configuration file is required to avoid touching existing code:

- Path: 
config/rfs.conf (to be created)
- Required keys:
  - S3_ENDPOINT (e.g., https://s3.example.com:9000)
  - S3_REGION
  - S3_BUCKET
  - S3_PREFIX (path prefix under the bucket for blobs and, optionally, manifests)
  - S3_ACCESS_KEY
  - S3_SECRET_KEY
- These values will be consumed by standalone scripts only, not by the existing build flow

## Proposed standalone scripts (no existing code changes)

Directory: scripts/rfs

- common.sh
  - Read [config/build.conf](config/build.conf) and [config/kernel.config](config/kernel.config) to compute FULL_KERNEL_VERSION
  - Read [config/rfs.conf](config/rfs.conf) and construct the RFS S3 store URI
  - Detect the rfs binary from PATH or [components/rfs](components/rfs)
  - Locate the modules and firmware source trees per the priority order above
- pack-modules.sh
  - Name: modules-&lt;FULL_VERSION&gt;.fl
  - Command: rfs pack -m dist/flists/modules-&lt;FULL_VERSION&gt;.fl -s s3://... /lib/modules/&lt;FULL_VERSION&gt;
  - Then upload the .fl manifest to s3://BUCKET/PREFIX/manifests/ via the aws CLI if available
- pack-firmware.sh
  - Name: firmware-&lt;TAG&gt;.fl, with the tag overridable via FIRMWARE_TAG
  - Source: $PROJECT_ROOT/firmware preferred, else initramfs/lib/firmware
  - Pack with rfs and upload the .fl manifest similarly
- verify-flist.sh
  - rfs flist inspect dist/flists/NAME.fl
  - rfs flist tree dist/flists/NAME.fl | head
  - Optional test mount on a temporary mountpoint when requested

## Future runtime units (deferred)

These will be added as new zinit units once flist generation is validated:

- Mount the firmware flist read-only at /lib/firmware (overmount to hide the initramfs firmware beneath)
- Mount the modules flist read-only at /lib/modules/&lt;FULL_VERSION&gt;
- Run depmod -a
- Run the udev coldplug sequence (reload, trigger add, settle)

Placement relative to current units:

- Must occur before [udev-trigger.yaml](config/zinit/udev-trigger.yaml)
- Should ensure [depmod.yaml](config/zinit/depmod.yaml) is sequenced after the modules become available from the mount

## Flow summary (Mermaid)

```mermaid
flowchart TD
    A[Build start] --> B[alpine_extract/configure/packages]
    B --> C[components_build verify]
    C --> 
D[kernel_modules install modules in container set KERNEL_FULL_VERSION]
    D --> E[init_script zinit_setup]
    E --> F[modules_setup copy]
    F --> G[cleanup validation]
    G --> H[initramfs_create test kernel_build]
    H --> I[boot_tests]

    subgraph RFS standalone
        R1[Compute FULL_VERSION from configs]
        R2[Select sources: modules /lib/modules/FULL_VERSION firmware PROJECT_ROOT/firmware or initramfs/lib/firmware]
        R3[Pack modules flist rfs pack -s s3://...]
        R4[Pack firmware flist rfs pack -s s3://...]
        R5[Upload .fl manifests to S3 manifests/]
        R6[Verify flists inspect/tree/mount opt]
    end

    H -. post-build manual .-> R1
    R1 --> R2 --> R3 --> R5
    R2 --> R4 --> R5
    R3 --> R6
    R4 --> R6
```

## Conclusion

- The existing build flow provides deterministic kernel versioning and installs modules into the container at /lib/modules/&lt;FULL_VERSION&gt;, which is ideal for RFS packing.
- Firmware can be sourced from the user-provided tree or the initramfs fallback.
- RFS flist creation and publishing can be introduced entirely as standalone scripts and configuration, without modifying current code.
- Runtime overmounting and coldplug can be added later via new zinit units once flist generation is validated.
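
## Appendix: sketch of the proposed helpers

To make the standalone-script proposal concrete, below is a minimal sketch of the helpers that scripts/rfs/common.sh and pack-modules.sh could share. The function names, the DRY_RUN switch, and the exact s3:// store-URI shape are assumptions to be validated against the rfs version in use; the config keys, the version derivation, and the pack command itself come from the sections above.

```shell
#!/bin/sh
# Hypothetical helpers for scripts/rfs/common.sh; not part of the current codebase.
set -eu

# Derive the full kernel version the way kernel_get_full_version() does:
# KERNEL_VERSION (config/build.conf) + CONFIG_LOCALVERSION (config/kernel.config).
# Never `uname -r` inside the container.
kernel_full_version() {
    kv="$(sed -n 's/^KERNEL_VERSION=//p' "$1/config/build.conf" | tr -d '"')"
    lv="$(sed -n 's/^CONFIG_LOCALVERSION=//p' "$1/config/kernel.config" | tr -d '"')"
    printf '%s%s\n' "$kv" "$lv"                 # e.g. 6.12.44-Zero-OS
}

# Build an s3:// store URI from the config/rfs.conf keys listed above.
# The URI syntax rfs expects may differ between rfs versions; verify before use.
rfs_store_uri() {
    . "$1/config/rfs.conf"
    host="${S3_ENDPOINT#*://}"                  # strip the https:// scheme
    printf 's3://%s:%s@%s/%s/%s?region=%s\n' \
        "$S3_ACCESS_KEY" "$S3_SECRET_KEY" "$host" \
        "$S3_BUCKET" "$S3_PREFIX" "$S3_REGION"
}

# Pack the modules tree into a flist, using the source-tree priority order:
# container /lib/modules first, initramfs copy as fallback.
pack_modules() {
    root="$1"
    full="$(kernel_full_version "$root")"
    src="/lib/modules/$full"
    [ -d "$src" ] || src="$root/initramfs/lib/modules/$full"
    out="$root/dist/flists/modules-$full.fl"
    if [ "${DRY_RUN:-0}" = 1 ]; then
        # Print the command instead of running it (useful for testing).
        printf 'rfs pack -m %s -s %s %s\n' "$out" "$(rfs_store_uri "$root")" "$src"
    else
        mkdir -p "$(dirname "$out")"
        rfs pack -m "$out" -s "$(rfs_store_uri "$root")" "$src"
    fi
}
```

A pack-firmware.sh counterpart would differ only in the source-tree priority ($PROJECT_ROOT/firmware, then initramfs/lib/firmware) and the manifest name.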