
RFS flist creation and runtime overmounts (design)

Goal

  • Produce two flists without modifying existing build scripts:
    • firmware-VERSION.fl
    • modules-KERNEL_FULL_VERSION.fl
  • Store blobs in S3 via rfs store; upload .fl manifest (sqlite) separately to S3.
  • Overmount these at runtime to enable extended hardware support, then run depmod and a udev trigger.

Scope of this change

  • Add standalone scripts under scripts/rfs (no changes in existing libs or stages).
  • Add a config file config/rfs.conf for S3 credentials and addressing.
  • Document the flow and usage here; scripting comes next.

Inputs

  • Built kernel modules present in the dev-container (from kernel build stages):
    • Preferred: /lib/modules/KERNEL_FULL_VERSION
  • Firmware source for RFS pack:
    • Install all Alpine linux-firmware* packages into the build container and use /lib/firmware as the source (full set).
  • Initramfs fallback (build-time):
    • Selective firmware packages installed by alpine_install_firmware() into initramfs/lib/firmware (kept inside the initramfs).
  • Kernel version derivation (never use uname -r in the container): KERNEL_FULL_VERSION is taken from the built kernel tree, e.g. the directory name under /lib/modules, as in the sketch below.
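
A minimal sketch of that derivation, assuming the dev-container has exactly one built kernel tree under /lib/modules (the helper name is hypothetical; the real logic lives in scripts/rfs/common.sh):

```bash
# Sketch: derive KERNEL_FULL_VERSION from the built modules tree, never from uname -r.
rfs_common_kernel_version() {
    local moddir
    moddir="$(find /lib/modules -mindepth 1 -maxdepth 1 -type d | head -n 1)"
    if [ -z "$moddir" ]; then
        echo "ERROR: no /lib/modules/<version> directory found" >&2
        return 1
    fi
    KERNEL_FULL_VERSION="$(basename "$moddir")"
    echo "$KERNEL_FULL_VERSION"
}
```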

Outputs and locations

  • Local manifests are written under dist/flists/ (e.g. dist/flists/modules-KERNEL_FULL_VERSION.fl, dist/flists/firmware-latest.fl).
  • Blobs are uploaded to the configured S3 store during pack; the .fl manifests (sqlite) can then be uploaded to S3 separately.

Configuration: config/rfs.conf. Required values:

  • S3_ENDPOINT=https://s3.example.com:9000
  • S3_REGION=us-east-1
  • S3_BUCKET=zos
  • S3_PREFIX=flists/zosbuilder
  • S3_ACCESS_KEY=AKIA...
  • S3_SECRET_KEY=...
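
For illustration, config/rfs.conf can be a plain KEY=VALUE file that the pack scripts source (placeholder values; the READ_*/ROUTE_* entries are explained in the notes and route sections below):

```bash
# config/rfs.conf -- placeholder values, sourced by the scripts under scripts/rfs/
S3_ENDPOINT="https://s3.example.com:9000"
S3_REGION="us-east-1"
S3_BUCKET="zos"
S3_PREFIX="flists/zosbuilder"
S3_ACCESS_KEY="AKIA..."
S3_SECRET_KEY="..."

# Read-only credentials and route settings used to patch the flist after pack
READ_ACCESS_KEY="..."
READ_SECRET_KEY="..."
ROUTE_ENDPOINT="$S3_ENDPOINT"
ROUTE_PATH="/blobs"
ROUTE_REGION="garage"
```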

Notes:

  • We construct an rfs S3 store URI for pack operations (for blob uploads during pack):
    • s3://S3_ACCESS_KEY:S3_SECRET_KEY@HOST:PORT/S3_BUCKET/S3_PREFIX?region=S3_REGION
  • After pack, we correct the flist route URL to include READ-ONLY credentials so mounts can read directly from Garage:
    • UPDATE route SET url='s3://READ_ACCESS_KEY:READ_SECRET_KEY@HOST:PORT/ROUTE_PATH?region=ROUTE_REGION'
    • Defaults: ROUTE_PATH=/blobs, ROUTE_REGION=garage, ROUTE_ENDPOINT=S3_ENDPOINT (overridable)
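
For illustration, the pack-time store URI and the post-pack route patch could be assembled like this (variable names follow config/rfs.conf above; the rfs and sqlite3 invocations are a sketch, not the exact code in scripts/rfs/common.sh):

```bash
# Sketch: build the pack-time store URI and patch the route afterwards.
# Assumes config/rfs.conf has been sourced and KERNEL_FULL_VERSION is known.
. config/rfs.conf

host_port="${S3_ENDPOINT#*://}"   # strip the scheme to get HOST:PORT

# Write-credential store URI used while packing (blob uploads).
store_uri="s3://${S3_ACCESS_KEY}:${S3_SECRET_KEY}@${host_port}/${S3_BUCKET}/${S3_PREFIX}?region=${S3_REGION}"
rfs pack -m "dist/flists/modules-${KERNEL_FULL_VERSION}.fl" -s "$store_uri" "/lib/modules/${KERNEL_FULL_VERSION}"

# Read-only route URL embedded into the manifest after pack.
route_host_port="${ROUTE_ENDPOINT:-$S3_ENDPOINT}"; route_host_port="${route_host_port#*://}"
route_url="s3://${READ_ACCESS_KEY}:${READ_SECRET_KEY}@${route_host_port}${ROUTE_PATH:-/blobs}?region=${ROUTE_REGION:-garage}"
sqlite3 "dist/flists/modules-${KERNEL_FULL_VERSION}.fl" "UPDATE route SET url='${route_url}';"
```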

Scripts to add (standalone)

  • scripts/rfs/common.sh: shared helpers (config loading, store URI construction, route patching)
  • scripts/rfs/pack-modules.sh: packs /lib/modules/KERNEL_FULL_VERSION into modules-KERNEL_FULL_VERSION.fl
  • scripts/rfs/pack-firmware.sh: packs /lib/firmware into the firmware flist
  • scripts/rfs/verify-flist.sh: inspects a manifest, optionally lists entries and runs a mount test

Runtime (deferred to a follow-up)

  • Runtime overmounting was initially out of scope for these pack scripts; see "Runtime units and ordering (zinit)" below for the zinit units and init scripts that now handle it.

Naming policy

  • modules flist:
    • modules-KERNEL_FULL_VERSION.fl
  • firmware flist:
    • firmware-YYYYMMDD.fl by default
    • firmware-FIRMWARE_TAG.fl if env FIRMWARE_TAG is set

Usage flow (after your normal build inside the dev-container)

  1. Create config for S3: config/rfs.conf
  2. Generate modules flist: scripts/rfs/pack-modules.sh
  3. Generate firmware flist: scripts/rfs/pack-firmware.sh
  4. Verify manifests: scripts/rfs/verify-flist.sh -m dist/flists/modules-...fl (see the example session below)
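
An illustrative session (the kernel version and firmware tag are examples; script option handling may differ):

```bash
# Inside the dev-container, after the normal build has produced /lib/modules/<version>:
"$EDITOR" config/rfs.conf                              # fill in S3_* / READ_* / ROUTE_* values

scripts/rfs/pack-modules.sh                            # -> dist/flists/modules-<KERNEL_FULL_VERSION>.fl
FIRMWARE_TAG="20250908" scripts/rfs/pack-firmware.sh   # -> dist/flists/firmware-20250908.fl

scripts/rfs/verify-flist.sh -m dist/flists/modules-6.12.44-Zero-OS.fl --tree
```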

Assumptions

  • rfs supports s3 store URIs as described (per components/rfs/README.md).
  • The dev-container has the built kernel modules in /lib/modules/KERNEL_FULL_VERSION (as produced via kernel_build_modules()).
  • No changes are made to existing build scripts. The new scripts are run on-demand.

Open question to confirm

  • Confirm S3 endpoint form (with or without explicit port) and whether we should prefer AWS_REGION env over query param; scripts will support both patterns.

Note on route URL vs HTTP endpoint

  • rfs mount reads blobs via s3:// URLs, not via an arbitrary HTTP(S) endpoint. A reverse proxy is not required if you embed read-only S3 credentials in the flist.
  • This project now patches the flist after pack to set route.url to a read-only Garage S3 URL:
    • Example SQL equivalent:
      • UPDATE route SET url='s3://READ_ACCESS_KEY:READ_SECRET_KEY@[HOST]:3900/blobs?region=garage';
  • Configure these in config/rfs.conf:
    • READ_ACCESS_KEY / READ_SECRET_KEY: read-only credentials
    • ROUTE_ENDPOINT (defaults to S3_ENDPOINT), ROUTE_PATH=/blobs, ROUTE_REGION=garage
  • Do not set ROUTE_PATH to S3_PREFIX. ROUTE_PATH is the gateway's blob route (usually /blobs); S3_PREFIX is only for the pack-time store path.
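
Since the .fl manifest is a sqlite database, the embedded route can be checked directly; a quick illustrative query:

```bash
# Show the blob route URL a manifest will use at mount time
# (the read-only credentials and /blobs path should appear here).
sqlite3 dist/flists/firmware-latest.fl "SELECT url FROM route;"
```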

Runtime units and ordering (zinit)

This repo now includes runtime zinit units and init scripts to mount the RFS flists and perform dual udev coldplug sequences.
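
For orientation, the post-overmount stage behaves roughly as in the sketch below; the flist locations, unit wiring, and exact rfs invocation here are assumptions, and the authoritative scripts live under config/zinit/init/.

```sh
#!/bin/sh
# Sketch of the runtime overmount + coldplug step (illustrative, not the shipped script).
set -e

KVER="$(ls /lib/modules | head -n1)"

# Overmount the full trees provided by the flists (manifest paths are hypothetical).
rfs mount -m "/var/cache/modules-${KVER}.fl" "/lib/modules/${KVER}"
rfs mount -m /var/cache/firmware-latest.fl /lib/firmware

# Rebuild module dependency data and re-run coldplug so the extended
# modules/firmware become usable.
depmod -a "$KVER"
udevadm trigger --action=add
udevadm settle
```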

Reproducible firmware tagging

  • The firmware flist name can be pinned via FIRMWARE_TAG in config/build.conf.
    • If set: firmware-FIRMWARE_TAG.fl
    • If unset: the build uses firmware-latest.fl for embedding (standalone pack may default to date-based).
  • The build logic picks the tag with this precedence:
    1. Environment FIRMWARE_TAG
    2. FIRMWARE_TAG from config/build.conf
    3. "latest"
  • Build integration implemented in scripts/build.sh.
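
The precedence can be sketched as follows (illustrative; the real logic is in scripts/build.sh and assumes config/build.conf is a sourceable shell file):

```bash
# Sketch of FIRMWARE_TAG resolution used to name the embedded firmware flist.
resolve_firmware_tag() {
    # 1) Environment variable wins.
    [ -n "${FIRMWARE_TAG:-}" ] && { echo "$FIRMWARE_TAG"; return; }
    # 2) Otherwise take FIRMWARE_TAG from config/build.conf, if present.
    if [ -f config/build.conf ]; then
        . ./config/build.conf
        [ -n "${FIRMWARE_TAG:-}" ] && { echo "$FIRMWARE_TAG"; return; }
    fi
    # 3) Default.
    echo "latest"
}

FIRMWARE_FLIST="firmware-$(resolve_firmware_tag).fl"
```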

Example:

  • Set FIRMWARE_TAG in config: add FIRMWARE_TAG="20250908" in config/build.conf
  • Or export at build time: export FIRMWARE_TAG="v1"

Verifying flists

Use the helper to inspect a manifest, optionally listing entries and testing a local mount (root + proper FUSE policy required):

  • Inspect only:
    • scripts/rfs/verify-flist.sh -m dist/flists/modules-6.12.44-Zero-OS.fl
  • Inspect + tree:
    • scripts/rfs/verify-flist.sh -m dist/flists/firmware-latest.fl --tree
  • Inspect + mount test to a temp dir:
    • sudo scripts/rfs/verify-flist.sh -m dist/flists/modules-6.12.44-Zero-OS.fl --mount