Summary
- Implemented a plain S3-only flist workflow (no web endpoint): rfs pack uploads blobs using write credentials, and the flist's route.url is patched to embed read-only S3 credentials so that rfs mount reads directly from S3.
Changes
1) New RFS tooling (scripts/rfs/)
- common.sh:
- Compute FULL_KERNEL_VERSION from configs (no uname).
- Load S3 config and construct pack store URI.
- Build read-only S3 route URL and patch flist (sqlite).
- Helpers to locate modules/firmware trees and rfs binary.
- pack-modules.sh:
- Pack /lib/modules/<FULL_KERNEL_VERSION> to dist/flists/modules-<FULL_KERNEL_VERSION>.fl
- Patch flist route to s3://READ_ACCESS_KEY:READ_SECRET_KEY@host:port/ROUTE_PATH?region=ROUTE_REGION (defaults: /blobs, garage).
- Optional upload of .fl using MinIO client (mcli/mc).
- pack-firmware.sh:
- Source firmware from $PROJECT_ROOT/firmware (fallback to initramfs/lib/firmware).
- Pack to dist/flists/firmware-<TAG_OR_DATE>.fl (FIRMWARE_TAG or YYYYMMDD).
- Patch flist route to read-only S3; optional .fl upload via mcli/mc.
- verify-flist.sh:
- rfs flist inspect/tree; optional mount test (best effort).
- patch-stores.sh:
- Helper to patch stores (kept for reference; not used by default).
2) Dev-container (Dockerfile)
- Added sqlite and the MinIO client package for manifest patching/upload (the mcli binary is expected at runtime; scripts support both mcli and mc).
- Retains rustup and musl target for building rfs/zinit/mycelium.
3) Config and examples
- config/rfs.conf.example:
- S3_ENDPOINT/S3_REGION/S3_BUCKET/S3_PREFIX
- S3_ACCESS_KEY/S3_SECRET_KEY (write)
- READ_ACCESS_KEY/READ_SECRET_KEY (read-only)
- ROUTE_ENDPOINT (defaults to S3_ENDPOINT), ROUTE_PATH=/blobs, ROUTE_REGION=garage
- MANIFESTS_SUBPATH, UPLOAD_MANIFESTS (mcli upload optional)
- config/rfs.conf is updated by the user with real values (not committed; the example file is included).
- config/modules.conf minor tweak (staged).
4) Zinit mount scripts (config/zinit/init/)
- firmware.sh:
- Mounts firmware-latest.fl over /usr/lib/firmware using rfs mount (env override FIRMWARE_FLIST supported).
- modules.sh:
- Mounts modules-$(uname -r).fl over /lib/modules/$(uname -r) (env override MODULES_FLIST supported).
- Both skip if the target is already mounted and respect the RFS_BIN env override (a sketch appears after this change list).
5) Documentation
- docs/rfs-flists.md:
- End-to-end flow, S3-only route URL patching, mcli upload notes.
- docs/review-rfs-integration.md:
- Integration points, build flow, and post-build standalone usage.
- docs/depmod-behavior.md:
- depmod reads .modinfo; recommends shipping prebuilt modules.* (and .bin) indexes; run depmod -A only on mismatch.
6) Utility
- scripts/functionlist.md synced with current functions.
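For reference, a minimal sketch of what config/zinit/init/modules.sh might look like (hedged: the default flist location under /var/cache is illustrative; the real script may source it elsewhere):

  #!/bin/sh
  # Mount the modules flist over /lib/modules/$(uname -r) unless already mounted.
  set -e
  KVER="$(uname -r)"
  RFS_BIN="${RFS_BIN:-rfs}"
  MODULES_FLIST="${MODULES_FLIST:-/var/cache/modules-${KVER}.fl}"
  TARGET="/lib/modules/${KVER}"
  mountpoint -q "$TARGET" && exit 0   # skip if already mounted
  mkdir -p "$TARGET"
  exec "$RFS_BIN" mount -m "$MODULES_FLIST" "$TARGET"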
Behavioral details
- Pack (write):
s3://S3_ACCESS_KEY:S3_SECRET_KEY@HOST:PORT/S3_BUCKET/S3_PREFIX?region=REGION
- Flist route (read, post-patch):
s3://READ_ACCESS_KEY:READ_SECRET_KEY@HOST:PORT/ROUTE_PATH?region=ROUTE_REGION
Defaults: ROUTE_PATH=/blobs, ROUTE_REGION=garage; ROUTE_ENDPOINT derived from S3_ENDPOINT if not set.
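Concrete example with hypothetical values (Garage at s3.example.com:3900, bucket zos, prefix flists/zosbuilder):
- Pack (write): s3://WRITEKEY:WRITESECRET@s3.example.com:3900/zos/flists/zosbuilder?region=garage
- Route (read): s3://READKEY:READSECRET@s3.example.com:3900/blobs?region=garage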
Runtime mount examples
- Modules:
rfs mount -m dist/flists/modules-6.12.44-Zero-OS.fl /lib/modules/6.12.44-Zero-OS
- Firmware:
rfs mount -m dist/flists/firmware-YYYYMMDD.fl /usr/lib/firmware
Notes
- FUSE policy: if an "allow_other" error occurs, enable user_allow_other in /etc/fuse.conf (see the snippet after these notes) or run mounts as root.
- WEB_ENDPOINT rewrite is disabled by default (set WEB_ENDPOINT=""). Plain S3 route is embedded in flists.
- MinIO client binary in dev-container is mcli; scripts support mcli (preferred) and mc (fallback).
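If mounts run as a non-root user, enabling the FUSE option looks like this (requires root; user_allow_other is a standard /etc/fuse.conf option):

  echo "user_allow_other" >> /etc/fuse.conf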
Files added/modified
- Added: scripts/rfs/{common.sh,pack-modules.sh,pack-firmware.sh,verify-flist.sh,patch-stores.sh}
- Added: config/zinit/init/{firmware.sh,modules.sh}
- Added: docs/{rfs-flists.md,review-rfs-integration.md,depmod-behavior.md}
- Added: config/rfs.conf.example
- Modified: Dockerfile, scripts/functionlist.md, config/modules.conf, config/zinit/sshd-setup.yaml, .gitignore
RFS flist creation and runtime overmounts (design)
Goal
- Produce two flists without modifying existing build scripts:
- firmware-VERSION.fl
- modules-KERNEL_FULL_VERSION.fl
- Store blobs in S3 via rfs store; upload .fl manifest (sqlite) separately to S3.
- Overmount these at runtime to enable extended hardware support, then run depmod and a udev trigger.
Scope of this change
- Add standalone scripts under scripts/rfs (no changes to existing libs or stages).
- Add a config file config/rfs.conf for S3 credentials and addressing.
- Document the flow and usage here; scripting comes next.
Inputs
- Built kernel modules present in the dev-container (from kernel build stages):
- Preferred: /lib/modules/KERNEL_FULL_VERSION
- Firmware tree:
- Preferred: $PROJECT_ROOT/firmware (prepopulated tree from the dev-container: "$root/firmware")
- Fallback: initramfs/lib/firmware created by apk install of firmware packages
- Kernel version derivation (never use uname -r in the container):
- Combine KERNEL_VERSION from config/build.conf and LOCALVERSION from config/kernel.config.
- This matches kernel_get_full_version().
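A minimal sketch of that derivation (hedged: assumes config/build.conf is shell-sourceable with KERNEL_VERSION=... and kernel.config carries CONFIG_LOCALVERSION="..."):

  # Mirror kernel_get_full_version() without touching uname.
  . config/build.conf                     # provides KERNEL_VERSION, e.g. 6.12.44
  LOCALVERSION="$(sed -n 's/^CONFIG_LOCALVERSION="\(.*\)"/\1/p' config/kernel.config)"
  FULL_KERNEL_VERSION="${KERNEL_VERSION}${LOCALVERSION}"   # e.g. 6.12.44-Zero-OS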
Outputs and locations
- Flists: written to dist/flists/ (modules-KERNEL_FULL_VERSION.fl and firmware-TAG.fl).
- Blobs are uploaded by rfs to the configured S3 store.
- Manifests (.fl sqlite) are uploaded by script as S3 objects (separate from blob store).
Configuration: config/rfs.conf
Required values:
- S3_ENDPOINT=https://s3.example.com:9000
- S3_REGION=us-east-1
- S3_BUCKET=zos
- S3_PREFIX=flists/zosbuilder
- S3_ACCESS_KEY=AKIA...
- S3_SECRET_KEY=...
Notes:
- We construct an rfs S3 store URI for blob uploads during pack:
- s3://S3_ACCESS_KEY:S3_SECRET_KEY@HOST:PORT/S3_BUCKET/S3_PREFIX?region=S3_REGION
- After pack, we correct the flist route URL to include READ-ONLY credentials so mounts can read directly from Garage:
- UPDATE route SET url='s3://READ_ACCESS_KEY:READ_SECRET_KEY@HOST:PORT/ROUTE_PATH?region=ROUTE_REGION'
- Defaults: ROUTE_PATH=/blobs, ROUTE_REGION=garage, ROUTE_ENDPOINT=S3_ENDPOINT (overridable)
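A hedged sketch of that patch step (assumes the .fl manifest is sqlite and sqlite3 is on PATH; ROUTE_HOST and ROUTE_PORT stand for the parsed ROUTE_ENDPOINT):

  sqlite3 "dist/flists/modules-${FULL_KERNEL_VERSION}.fl" \
    "UPDATE route SET url='s3://${READ_ACCESS_KEY}:${READ_SECRET_KEY}@${ROUTE_HOST}:${ROUTE_PORT}${ROUTE_PATH}?region=${ROUTE_REGION}';"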
Scripts to add (standalone)
- scripts/rfs/common.sh:
- Read config/build.conf and config/kernel.config.
- Compute FULL_KERNEL_VERSION exactly as kernel_get_full_version().
- Read and validate config/rfs.conf.
- Build S3 store URI for rfs.
- Locate module and firmware source trees (with priority rules).
- Locate rfs binary (PATH first, fallback to components/rfs/target/x86_64-unknown-linux-musl/release/rfs).
- scripts/rfs/pack-modules.sh:
- Name: modules-KERNEL_FULL_VERSION.fl (e.g., modules-6.12.44-Zero-OS.fl).
- rfs pack -m dist/flists/modules-...fl -s s3://... /lib/modules/KERNEL_FULL_VERSION
- Optional: upload dist/flists/modules-...fl to s3://S3_BUCKET/S3_PREFIX/manifests/ using MinIO Client (mc) if present.
- scripts/rfs/pack-firmware.sh:
- Source: $PROJECT_ROOT/firmware if exists, else initramfs/lib/firmware.
- Name: firmware-YYYYMMDD.fl by default; override with FIRMWARE_TAG env to firmware-FIRMWARE_TAG.fl.
- rfs pack as above; optional upload of the .fl manifest using MinIO Client (mc) if present.
- scripts/rfs/verify-flist.sh:
- rfs flist inspect dist/flists/NAME.fl
- rfs flist tree dist/flists/NAME.fl | head
- Optional: test mount if run with --mount (mountpoint under /tmp).
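The optional manifest upload mentioned above might look like this (hedged: the alias name zos is illustrative; mcli and mc share the same CLI):

  mcli alias set zos "$S3_ENDPOINT" "$S3_ACCESS_KEY" "$S3_SECRET_KEY"
  mcli cp "dist/flists/modules-${FULL_KERNEL_VERSION}.fl" "zos/${S3_BUCKET}/${S3_PREFIX}/manifests/"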
Runtime (deferred to a follow-up)
- New zinit units to mount and coldplug (a sketch of the sequence follows this section):
- Mount firmware flist read-only at /usr/lib/firmware
- Mount modules flist at /lib/modules/KERNEL_FULL_VERSION
- Run depmod -a KERNEL_FULL_VERSION
- udevadm control --reload; udevadm trigger --action=add; udevadm settle
- Placement examples (to be created later):
- config/zinit/rfs-modules.yaml
- config/zinit/rfs-firmware.yaml
- Keep these in the correct dependency order, before config/zinit/udev-trigger.yaml.
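A hedged sketch of the sequence those units would encode (flist paths are illustrative):

  KVER="$(uname -r)"
  rfs mount -m "/var/cache/firmware-latest.fl" /usr/lib/firmware
  rfs mount -m "/var/cache/modules-${KVER}.fl" "/lib/modules/${KVER}"
  depmod -a "$KVER"
  udevadm control --reload
  udevadm trigger --action=add
  udevadm settle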
Naming policy
- modules flist:
- modules-KERNEL_FULL_VERSION.fl
- firmware flist:
- firmware-YYYYMMDD.fl by default
- firmware-FIRMWARE_TAG.fl if env FIRMWARE_TAG is set
Usage flow (after your normal build inside the dev-container)
- Create config for S3: config/rfs.conf
- Generate modules flist: scripts/rfs/pack-modules.sh
- Generate firmware flist: scripts/rfs/pack-firmware.sh
- Verify manifests: scripts/rfs/verify-flist.sh dist/flists/modules-...fl
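Put together (hedged: the kernel version matches the runtime mount example above):

  cp config/rfs.conf.example config/rfs.conf   # then fill in real credentials
  scripts/rfs/pack-modules.sh
  scripts/rfs/pack-firmware.sh
  scripts/rfs/verify-flist.sh dist/flists/modules-6.12.44-Zero-OS.fl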
Assumptions
- rfs supports s3 store URIs as described (per components/rfs/README.md).
- The dev-container has the built kernel modules in /lib/modules/KERNEL_FULL_VERSION (as produced via kernel_build_modules()).
- No changes are made to existing build scripts. The new scripts are run on-demand.
Open question to confirm
- Confirm the S3 endpoint form (with or without an explicit port) and whether to prefer the AWS_REGION env var over the region query param; the scripts will support both patterns.
Note on route URL vs HTTP endpoint
- rfs mount reads blobs via s3:// URLs, not via an arbitrary HTTP(S) endpoint. A reverse proxy is not required if you embed read-only S3 credentials in the flist.
- This project now patches the flist after pack to set route.url to a read-only Garage S3 URL:
- Example SQL equivalent:
- UPDATE route SET url='s3://READ_ACCESS_KEY:READ_SECRET_KEY@[HOST]:3900/blobs?region=garage';
- Configure these in config/rfs.conf:
- READ_ACCESS_KEY / READ_SECRET_KEY: read-only credentials
- ROUTE_ENDPOINT (defaults to S3_ENDPOINT), ROUTE_PATH=/blobs, ROUTE_REGION=garage
- Do not set ROUTE_PATH to S3_PREFIX. ROUTE_PATH is the gateway’s blob route (usually /blobs). S3_PREFIX is only for the pack-time store path.
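Illustration in config/rfs.conf terms (hedged values):

  ROUTE_PATH=/blobs                    # correct: the gateway's blob route
  # ROUTE_PATH=/zos/flists/zosbuilder  # wrong: that is the pack-time S3_BUCKET/S3_PREFIX path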