RFS flist creation and runtime overmounts (design)
Goal
- Produce two flists without modifying existing build scripts:
- firmware-VERSION.fl
- modules-KERNEL_FULL_VERSION.fl
- Store blobs in S3 via rfs store; upload .fl manifest (sqlite) separately to S3.
- Overmount these at runtime to enable extended hardware support, then run depmod and a udev trigger.
Scope of this change
- Add standalone scripts under scripts/rfs (no changes in existing libs or stages).
- Add a config file config/rfs.conf for S3 credentials and addressing.
- Document the flow and usage here; scripting comes next.
Inputs
- Built kernel modules present in the dev-container (from kernel build stages):
- Preferred: /lib/modules/KERNEL_FULL_VERSION
- Firmware source for RFS pack:
- Install all Alpine linux-firmware* packages into the build container and use /lib/firmware as the source (full set).
- Initramfs fallback (build-time):
- Selective firmware packages are installed by alpine_install_firmware() into initramfs/lib/firmware (kept inside the initramfs).
- Kernel version derivation (never use uname -r in container):
- Combine KERNEL_VERSION from config/build.conf and LOCALVERSION from config/kernel.config.
- This matches kernel_get_full_version().
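For illustration, the derivation might look roughly like this (a minimal sketch assuming config/build.conf is shell-sourceable and kernel.config uses the usual quoted CONFIG_LOCALVERSION form; kernel_get_full_version() remains authoritative):

```bash
# Sketch: derive FULL_KERNEL_VERSION without calling uname -r in the container.
source config/build.conf                   # provides KERNEL_VERSION, e.g. 6.12.44
LOCALVERSION="$(grep '^CONFIG_LOCALVERSION=' config/kernel.config | cut -d'"' -f2)"
FULL_KERNEL_VERSION="${KERNEL_VERSION}${LOCALVERSION}"   # e.g. 6.12.44-Zero-OS
```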
Outputs and locations
- Flists are written to dist/flists/ (modules-KERNEL_FULL_VERSION.fl and firmware-VERSION.fl).
- Blobs are uploaded by rfs to the configured S3 store.
- Manifests (.fl sqlite) are uploaded by script as S3 objects (separate from blob store).
Configuration: config/rfs.conf
Required values:
- S3_ENDPOINT=https://s3.example.com:9000
- S3_REGION=us-east-1
- S3_BUCKET=zos
- S3_PREFIX=flists/zosbuilder
- S3_ACCESS_KEY=AKIA...
- S3_SECRET_KEY=...
Notes:
- We construct an rfs S3 store URI used for blob uploads during pack operations:
- s3://S3_ACCESS_KEY:S3_SECRET_KEY@HOST:PORT/S3_BUCKET/S3_PREFIX?region=S3_REGION
- After pack, we correct the flist route URL to include READ-ONLY credentials so mounts can read directly from Garage:
- UPDATE route SET url='s3://READ_ACCESS_KEY:READ_SECRET_KEY@HOST:PORT/ROUTE_PATH?region=ROUTE_REGION'
- Defaults: ROUTE_PATH=/blobs, ROUTE_REGION=garage, ROUTE_ENDPOINT=S3_ENDPOINT (overridable)
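For illustration, the pack-time store URI could be assembled from config/rfs.conf roughly like this (a sketch; splitting the scheme off S3_ENDPOINT is an assumption, and credentials containing special characters would need URL-encoding):

```bash
# Sketch: build the rfs S3 store URI used for blob uploads during pack.
source config/rfs.conf                     # assumes shell-sourceable KEY=value pairs
HOSTPORT="${S3_ENDPOINT#*://}"             # e.g. s3.example.com:9000
STORE_URI="s3://${S3_ACCESS_KEY}:${S3_SECRET_KEY}@${HOSTPORT}/${S3_BUCKET}/${S3_PREFIX}?region=${S3_REGION}"
```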
Scripts to add (standalone)
- scripts/rfs/common.sh (shared helpers):
- Read config/build.conf and config/kernel.config.
- Compute FULL_KERNEL_VERSION exactly as kernel_get_full_version().
- Read and validate config/rfs.conf.
- Build S3 store URI for rfs.
- Locate module and firmware source trees (with priority rules).
- Locate rfs binary (PATH first, fallback to components/rfs/target/x86_64-unknown-linux-musl/release/rfs).
- scripts/rfs/pack-modules.sh (see the combined sketch after this list):
- Name: modules-KERNEL_FULL_VERSION.fl (e.g., modules-6.12.44-Zero-OS.fl).
- rfs pack -m dist/flists/modules-...fl -s s3://... /lib/modules/KERNEL_FULL_VERSION
- Optional: upload dist/flists/modules-...fl to s3://S3_BUCKET/S3_PREFIX/manifests/ using MinIO Client (mc) if present.
- scripts/rfs/pack-firmware.sh:
- Source: $PROJECT_ROOT/firmware if it exists, otherwise initramfs/lib/firmware.
- Name: firmware-YYYYMMDD.fl by default; override with FIRMWARE_TAG env to firmware-FIRMWARE_TAG.fl.
- rfs pack as above; optional upload of the .fl manifest using MinIO Client (mc) if present.
- scripts/rfs/verify-flist.sh:
- rfs flist inspect dist/flists/NAME.fl
- rfs flist tree dist/flists/NAME.fl | head
- Optional: test mount if run with --mount (mountpoint under /tmp).
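Taken together, packing the modules flist might look roughly like the sketch below (reusing FULL_KERNEL_VERSION and STORE_URI from the sketches above; the mc alias name is illustrative and the helper scripts remain authoritative):

```bash
# Sketch: locate rfs, pack the modules tree, optionally upload the manifest.
RFS_BIN="$(command -v rfs || true)"
[ -n "${RFS_BIN}" ] || RFS_BIN="components/rfs/target/x86_64-unknown-linux-musl/release/rfs"
[ -x "${RFS_BIN}" ] || { echo "rfs binary not found" >&2; exit 1; }

mkdir -p dist/flists
MANIFEST="dist/flists/modules-${FULL_KERNEL_VERSION}.fl"

# Pack: blobs go to the S3 store, the .fl manifest is written locally.
"${RFS_BIN}" pack -m "${MANIFEST}" -s "${STORE_URI}" "/lib/modules/${FULL_KERNEL_VERSION}"

# Optional: upload the manifest itself with MinIO Client (mc), if installed.
if command -v mc >/dev/null 2>&1; then
  mc alias set zos-s3 "${S3_ENDPOINT}" "${S3_ACCESS_KEY}" "${S3_SECRET_KEY}"
  mc cp "${MANIFEST}" "zos-s3/${S3_BUCKET}/${S3_PREFIX}/manifests/"
fi
```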
Runtime (deferred to a follow-up)
- New zinit units to mount and coldplug:
- Mount firmware flist read-only at /lib/firmware
- Mount modules flist at /lib/modules/KERNEL_FULL_VERSION
- Run depmod -a KERNEL_FULL_VERSION
- udevadm control --reload; udevadm trigger --action=add; udevadm settle
- Placement examples (to be created later):
- config/zinit/rfs-modules.yaml
- config/zinit/rfs-firmware.yaml
- Keep in correct dependency order before config/zinit/udev-trigger.yaml.
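For illustration, the runtime sequence (split across the zinit units described below) would look roughly like this. The rfs mount invocation is an assumption based on the rfs CLI, and the /etc/rfs manifest paths match the embedded locations described further down; the actual init scripts are authoritative.

```bash
# Sketch: overmount the flists, refresh module metadata, re-probe hardware.
KVER="$(uname -r)"   # at runtime this equals KERNEL_FULL_VERSION

# zinit runs the mounts as long-lived daemons and orders depmod/udev after them;
# they are backgrounded here only to keep the sketch linear.
rfs mount -m "/etc/rfs/firmware-latest.fl" /lib/firmware &
rfs mount -m "/etc/rfs/modules-${KVER}.fl" "/lib/modules/${KVER}" &

depmod -a "${KVER}"
udevadm control --reload
udevadm trigger --action=add
udevadm settle
```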
Naming policy
- modules flist:
- modules-KERNEL_FULL_VERSION.fl
- firmware flist:
- firmware-YYYYMMDD.fl by default
- firmware-FIRMWARE_TAG.fl if env FIRMWARE_TAG is set
Usage flow (after your normal build inside dev-container)
- Create config for S3: config/rfs.conf
- Generate modules flist: scripts/rfs/pack-modules.sh
- Generate firmware flist: scripts/rfs/pack-firmware.sh
- Verify manifests: scripts/rfs/verify-flist.sh dist/flists/modules-...fl
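A typical session inside the dev-container might look like this (a sketch; the FIRMWARE_TAG override is optional, see the tagging section below):

```bash
# Sketch: pack and verify after a normal build.
$EDITOR config/rfs.conf                    # fill in S3_*, READ_*, ROUTE_* values
scripts/rfs/pack-modules.sh
FIRMWARE_TAG="20250908" scripts/rfs/pack-firmware.sh
scripts/rfs/verify-flist.sh -m dist/flists/modules-6.12.44-Zero-OS.fl --tree
```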
Assumptions
- rfs supports s3 store URIs as described (per components/rfs/README.md).
- The dev-container has the built kernel modules in /lib/modules/KERNEL_FULL_VERSION (as produced via kernel_build_modules()).
- No changes are made to existing build scripts. The new scripts are run on-demand.
Open question to confirm
- Confirm the S3 endpoint form (with or without an explicit port) and whether we should prefer the AWS_REGION env var over the query param; the scripts will support both patterns.
Note on route URL vs HTTP endpoint
- rfs mount reads blobs via s3:// URLs, not via an arbitrary HTTP(S) endpoint. A reverse proxy is not required if you embed read-only S3 credentials in the flist.
- This project now patches the flist after pack to set route.url to a read-only Garage S3 URL:
- Example SQL equivalent:
- UPDATE route SET url='s3://READ_ACCESS_KEY:READ_SECRET_KEY@[HOST]:3900/blobs?region=garage';
- Configure these in config/rfs.conf:
- READ_ACCESS_KEY / READ_SECRET_KEY: read-only credentials
- ROUTE_ENDPOINT (defaults to S3_ENDPOINT), ROUTE_PATH=/blobs, ROUTE_REGION=garage
- Do not set ROUTE_PATH to S3_PREFIX. ROUTE_PATH is the gateway’s blob route (usually /blobs). S3_PREFIX is only for the pack-time store path.
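For illustration, the post-pack patch could be applied with sqlite3 roughly as follows (a sketch; the route table and url column are those shown above, while extracting host:port from ROUTE_ENDPOINT is an assumption):

```bash
# Sketch: point the manifest's route at read-only Garage credentials.
source config/rfs.conf
ROUTE_ENDPOINT="${ROUTE_ENDPOINT:-$S3_ENDPOINT}"
ROUTE_PATH="${ROUTE_PATH:-/blobs}"
ROUTE_REGION="${ROUTE_REGION:-garage}"
HOSTPORT="${ROUTE_ENDPOINT#*://}"          # keep host[:port], drop the scheme

sqlite3 "dist/flists/modules-${FULL_KERNEL_VERSION}.fl" \
  "UPDATE route SET url='s3://${READ_ACCESS_KEY}:${READ_SECRET_KEY}@${HOSTPORT}${ROUTE_PATH}?region=${ROUTE_REGION}';"
```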
Runtime units and ordering (zinit)
This repo now includes runtime zinit units and init scripts to mount the RFS flists and perform dual udev coldplug sequences.
- Early coldplug (before RFS mounts):
- config/zinit/udev-trigger.yaml calls config/zinit/init/udev.sh.
- Runs after depmod/udev daemons to initialize NICs and other devices using what is already in the initramfs.
- Purpose: bring up networking so RFS can reach Garage S3.
- RFS mounts (daemons, after network):
- config/zinit/rfs-modules.yaml runs config/zinit/init/modules.sh to mount modules-$(uname -r).fl onto /lib/modules/$(uname -r).
- config/zinit/rfs-firmware.yaml runs config/zinit/init/firmware.sh to mount firmware-latest.fl onto /usr/lib/firmware.
- Both are defined as restart: always and include after: network to ensure the Garage S3 route is reachable.
- Post-mount coldplug (after RFS mounts):
- config/zinit/udev-rfs.yaml performs:
- udevadm control --reload
- udevadm trigger --action=add --type=subsystems
- udevadm trigger --action=add --type=devices
- udevadm settle
- This re-probes hardware so new modules/firmware from the overmounted flists are considered.
- Embedded manifests in initramfs:
- The build embeds the flists under /etc/rfs:
- modules-KERNEL_FULL_VERSION.fl
- firmware-latest.fl
- Creation happens in scripts/rfs/pack-modules.sh and scripts/rfs/pack-firmware.sh, and embedding is orchestrated by scripts/build.sh.
Reproducible firmware tagging
- The firmware flist name can be pinned via FIRMWARE_TAG in config/build.conf.
- If set: firmware-FIRMWARE_TAG.fl
- If unset: the build uses firmware-latest.fl for embedding (standalone pack may default to date-based).
- The build logic picks the tag with this precedence:
- Environment FIRMWARE_TAG
- FIRMWARE_TAG from config/build.conf
- "latest"
- Build integration implemented in scripts/build.sh.
Example:
- Set FIRMWARE_TAG in config: add FIRMWARE_TAG="20250908" in config/build.conf
- Or export at build time: export FIRMWARE_TAG="v1"
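A sketch of that precedence (assumes the quoted FIRMWARE_TAG="..." form shown above; scripts/build.sh is authoritative):

```bash
# Sketch: resolve the firmware tag (env > config/build.conf > "latest").
tag="${FIRMWARE_TAG:-}"
if [ -z "${tag}" ]; then
  tag="$(grep -E '^FIRMWARE_TAG=' config/build.conf | cut -d'"' -f2 || true)"
fi
FIRMWARE_TAG="${tag:-latest}"
echo "firmware-${FIRMWARE_TAG}.fl"
```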
Verifying flists
Use the helper to inspect a manifest, optionally listing entries and testing a local mount (root + proper FUSE policy required):
- Inspect only:
- scripts/rfs/verify-flist.sh -m dist/flists/modules-6.12.44-Zero-OS.fl
- Inspect + tree:
- scripts/rfs/verify-flist.sh -m dist/flists/firmware-latest.fl --tree
- Inspect + mount test to a temp dir:
- sudo scripts/rfs/verify-flist.sh -m dist/flists/modules-6.12.44-Zero-OS.fl --mount