forked from tfgrid/zosbuilder
feat(rfs): flist pack to S3 + read-only route embedding + zinit mount scripts; docs; dev-container tooling
Summary
- Implemented a plain S3-only flist workflow (no web endpoint): rfs pack uploads blobs using write credentials, and the flist's route.url is patched to embed read-only S3 credentials so that rfs mount reads directly from S3.
Changes
1) New RFS tooling (scripts/rfs/)
- common.sh:
- Compute FULL_KERNEL_VERSION from configs (no uname).
- Load S3 config and construct pack store URI.
- Build read-only S3 route URL and patch flist (sqlite).
- Helpers to locate modules/firmware trees and rfs binary.
- pack-modules.sh:
- Pack /lib/modules/<FULL_KERNEL_VERSION> to dist/flists/modules-<FULL_KERNEL_VERSION>.fl
- Patch flist route to s3://READ:READ@host:port/ROUTE_PATH?region=ROUTE_REGION (default /blobs, garage).
- Optional upload of .fl using MinIO client (mcli/mc).
- pack-firmware.sh:
- Source firmware from $PROJECT_ROOT/firmware (fallback to initramfs/lib/firmware).
- Pack to dist/flists/firmware-<TAG_OR_DATE>.fl (FIRMWARE_TAG or YYYYMMDD).
- Patch flist route to read-only S3; optional .fl upload via mcli/mc.
- verify-flist.sh:
- rfs flist inspect/tree; optional mount test (best effort).
- patch-stores.sh:
- Helper to patch stores (kept though not used by default).
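The pack-then-patch flow above can be sketched as a couple of shell helpers. This is a hedged sketch, not the actual common.sh: the `rfs pack` flags (`-m`, `-s`) and the sqlite schema (a `route` table with a `url` column, implied by "route.url" above) are assumptions — verify against `rfs pack --help` and `sqlite3 "$FL" .schema`.

```shell
#!/bin/sh
# Sketch of the pack-then-patch flow described above (assumed CLI flags/schema).

# Write store used by `rfs pack` to upload blobs (write credentials).
pack_store_uri() {
    echo "s3://${S3_ACCESS_KEY}:${S3_SECRET_KEY}@${S3_HOSTPORT}/${S3_BUCKET}/${S3_PREFIX}?region=${S3_REGION}"
}

# Read-only route embedded into the .fl after packing.
read_route_url() {
    echo "s3://${READ_ACCESS_KEY}:${READ_SECRET_KEY}@${S3_HOSTPORT}${ROUTE_PATH:-/blobs}?region=${ROUTE_REGION:-garage}"
}

# Pack a tree, then rewrite the manifest route to the read-only URL.
pack_and_patch() {
    fl="$1"; tree="$2"
    rfs pack -m "$fl" -s "$(pack_store_uri)" "$tree"
    sqlite3 "$fl" "UPDATE route SET url = '$(read_route_url)';"
}
```

With this split, runtime mounts never see the write keys: only `read_route_url` ends up inside the shipped .fl.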
2) Dev-container (Dockerfile)
- Added the sqlite and MinIO client packages for manifest patching/upload (the binary is expected to be mcli at runtime; scripts support both mcli and mc).
- Retains rustup and musl target for building rfs/zinit/mycelium.
3) Config and examples
- config/rfs.conf.example:
- S3_ENDPOINT/S3_REGION/S3_BUCKET/S3_PREFIX
- S3_ACCESS_KEY/S3_SECRET_KEY (write)
- READ_ACCESS_KEY/READ_SECRET_KEY (read-only)
- ROUTE_ENDPOINT (defaults to S3_ENDPOINT), ROUTE_PATH=/blobs, ROUTE_REGION=garage
- MANIFESTS_SUBPATH, UPLOAD_MANIFESTS (mcli upload optional)
- config/rfs.conf updated by user with real values (not committed here; example included).
- config/modules.conf minor tweak (staged).
4) Zinit mount scripts (config/zinit/init/)
- firmware.sh:
- Mounts firmware-latest.fl over /usr/lib/firmware using rfs mount (env override FIRMWARE_FLIST supported).
- modules.sh:
- Mounts modules-$(uname -r).fl over /lib/modules/$(uname -r) (env override MODULES_FLIST supported).
- Both skip if target already mounted and respect RFS_BIN env.
5) Documentation
- docs/rfs-flists.md:
- End-to-end flow, S3-only route URL patching, mcli upload notes.
- docs/review-rfs-integration.md:
- Integration points, build flow, and post-build standalone usage.
- docs/depmod-behavior.md:
- depmod reads each module's .modinfo section; recommend shipping prebuilt modules.* (and modules.*.bin) indexes and running depmod -A only on mismatch.
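The "depmod -A only on mismatch" policy from docs/depmod-behavior.md can be sketched as a small guard (a sketch, not the documented script; the staleness heuristic — directory mtime vs. modules.dep — is an assumption):

```shell
#!/bin/sh
# Sketch: decide whether depmod needs to run for a modules tree.
# Returns success (0) when modules.dep is missing or older than the tree.
needs_depmod() {
    moddir="$1"
    [ ! -f "$moddir/modules.dep" ] || [ "$moddir" -nt "$moddir/modules.dep" ]
}

# Usage at boot (assumed entry point):
#   KVER="$(uname -r)"
#   needs_depmod "/lib/modules/${KVER}" && depmod -A "${KVER}"
```

`depmod -A` itself only rebuilds indexes for modules newer than modules.dep, so the guard mainly avoids invoking it at all on the common prebuilt-index path.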
6) Utility
- scripts/functionlist.md synced with current functions.
Behavioral details
- Pack (write):
s3://S3_ACCESS_KEY:S3_SECRET_KEY@HOST:PORT/S3_BUCKET/S3_PREFIX?region=REGION
- Flist route (read, post-patch):
s3://READ_ACCESS_KEY:READ_SECRET_KEY@HOST:PORT/ROUTE_PATH?region=ROUTE_REGION
Defaults: ROUTE_PATH=/blobs, ROUTE_REGION=garage; ROUTE_ENDPOINT derived from S3_ENDPOINT if not set.
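The ROUTE_ENDPOINT defaulting and "scheme is ignored; host:port is extracted" behavior can be sketched with plain parameter expansion (a sketch of the described behavior, not the literal common.sh code):

```shell
#!/bin/sh
# Sketch: derive HOST:PORT for the route URL from ROUTE_ENDPOINT,
# falling back to S3_ENDPOINT when ROUTE_ENDPOINT is unset or empty.
route_hostport() {
    endpoint="${ROUTE_ENDPOINT:-$S3_ENDPOINT}"
    hostport="${endpoint#*://}"    # drop http:// or https:// scheme
    echo "${hostport%%/*}"         # drop any trailing path component
}
```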
Runtime mount examples
- Modules:
rfs mount -m dist/flists/modules-6.12.44-Zero-OS.fl /lib/modules/6.12.44-Zero-OS
- Firmware:
rfs mount -m dist/flists/firmware-YYYYMMDD.fl /usr/lib/firmware
Notes
- FUSE policy: If "allow_other" error occurs, enable user_allow_other in /etc/fuse.conf or run mounts as root.
- WEB_ENDPOINT rewrite is disabled by default (set WEB_ENDPOINT=""). Plain S3 route is embedded in flists.
- MinIO client binary in dev-container is mcli; scripts support mcli (preferred) and mc (fallback).
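The mcli-preferred/mc-fallback selection can be sketched as follows (a sketch of the lookup order stated above; the function name is illustrative):

```shell
#!/bin/sh
# Sketch: pick the available MinIO client binary, preferring mcli.
pick_minio_client() {
    if command -v mcli >/dev/null 2>&1; then
        echo mcli
    elif command -v mc >/dev/null 2>&1; then
        echo mc
    else
        return 1    # no client available; caller should skip upload
    fi
}
```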
Files added/modified
- Added: scripts/rfs/{common.sh,pack-modules.sh,pack-firmware.sh,verify-flist.sh,patch-stores.sh}
- Added: config/zinit/init/{firmware.sh,modules.sh}
- Added: docs/{rfs-flists.md,review-rfs-integration.md,depmod-behavior.md}
- Added: config/rfs.conf.example
- Modified: Dockerfile, scripts/functionlist.md, config/modules.conf, config/zinit/sshd-setup.yaml, .gitignore
config/modules.conf
@@ -10,17 +10,17 @@ stage1:virtio_pci:none # Virtio PCI bus
stage1:virtio_net:none               # Virtio network (VMs, cloud)
stage1:virtio_scsi:none              # Virtio SCSI (VMs, cloud)
stage1:virtio_blk:none               # Virtio block (VMs, cloud)
stage1:e1000:linux-firmware-intel    # Intel E1000 (very common)
stage1:e1000e:linux-firmware-intel   # Intel E1000E (very common)
stage1:r8169:linux-firmware-realtek  # Realtek (most common desktop/server)
stage1:igb:linux-firmware-intel      # Intel Gigabit (servers)
stage1:ixgbe:linux-firmware-intel    # Intel 10GbE (servers)
stage1:i40e:linux-firmware-intel     # Intel 40GbE (modern servers)
stage1:ice:linux-firmware-intel      # Intel E800 series (latest)
stage1:8139too:none                  # Realtek 8139 (legacy)
stage1:8139cp:none                   # Realtek 8139C+ (legacy)
stage1:bnx2:linux-firmware-bnx2      # Broadcom NetXtreme
stage1:bnx2x:linux-firmware-bnx2     # Broadcom NetXtreme II
stage1:tg3:none                      # Broadcom Tigon3
stage1:b44:none                      # Broadcom 44xx
stage1:atl1:none                     # Atheros L1
@@ -35,4 +35,5 @@ stage1:nvme_core:none # Core NVMe subsystem (REQUIRED)
stage1:nvme:none                     # NVMe storage
stage1:tun:none                      # TUN/TAP for networking
stage1:overlay:none                  # OverlayFS for containers
stage1:fuse:none                     # FUSE (required for rfs mounts)
config/rfs.conf.example (new file, 57 lines)
@@ -0,0 +1,57 @@
# RFS S3 (Garage) configuration for flist storage and HTTP read endpoint
# Copy this file to config/rfs.conf and fill in real values (do not commit secrets).

# S3 API endpoint of your Garage server, including scheme and optional port
# Examples:
#   https://hub.grid.tf
#   http://minio:9000
S3_ENDPOINT="https://hub.grid.tf"

# AWS region string expected by the S3-compatible API
S3_REGION="us-east-1"

# Bucket and key prefix used for the RFS store (content-addressed blobs)
# The RFS store path will be: s3://.../<S3_BUCKET>/<S3_PREFIX>
S3_BUCKET="zos"
S3_PREFIX="zosbuilder/store"

# Access credentials (required by rfs pack to push blobs)
S3_ACCESS_KEY="REPLACE_ME"
S3_SECRET_KEY="REPLACE_ME"

# Optional: HTTP(S) web endpoint used at runtime to fetch blobs without signed S3.
# This is the base URL that serves the same objects as the S3 store, typically a
# public or authenticated gateway in front of Garage that allows read access.
# The scripts will patch the .fl (sqlite) stores table to use this endpoint.
# Ensure this path maps to the same content-addressed layout expected by rfs.
# Example:
#   https://hub.grid.tf/zos/zosbuilder/store
WEB_ENDPOINT="https://hub.grid.tf/zos/zosbuilder/store"

# Optional: where to upload the .fl manifest sqlite file (separate from the blob store).
# To keep manifests alongside blobs, a common pattern is:
#   s3://<S3_BUCKET>/<S3_PREFIX>/manifests/
# Scripts will create manifests/ under S3_PREFIX automatically if left at the default.
MANIFESTS_SUBPATH="manifests"

# Behavior flags (can be overridden by CLI flags or env)
# Whether to keep the s3:// store as a fallback entry in the .fl after adding WEB_ENDPOINT
KEEP_S3_FALLBACK="false"

# Whether to attempt uploading .fl manifests to S3 (requires MinIO Client: mcli or mc)
UPLOAD_MANIFESTS="false"

# Read-only credentials for the route URL in the manifest (optional; default to the write keys above).
# These will be embedded into the flist 'route.url' so runtime mounts can read directly from Garage.
# If not set, scripts fall back to S3_ACCESS_KEY/S3_SECRET_KEY.
READ_ACCESS_KEY="REPLACE_ME_READ"
READ_SECRET_KEY="REPLACE_ME_READ"

# Route endpoint and parameters for flist route URL patching
# - ROUTE_ENDPOINT: host:port base for the Garage gateway (scheme is ignored; host:port is extracted)
#   If not set, defaults to S3_ENDPOINT
# - ROUTE_PATH: path to the blob route (default: /blobs)
# - ROUTE_REGION: region string for Garage (default: garage)
ROUTE_ENDPOINT="https://hub.grid.tf"
ROUTE_PATH="/blobs"
ROUTE_REGION="garage"
config/zinit/init/firmware.sh (new file, 46 lines)
@@ -0,0 +1,46 @@
#!/bin/sh
# rfs mount firmware flist over /usr/lib/firmware (plain S3 route inside the .fl)
# Looks for firmware-latest.fl in known locations; can be overridden via FIRMWARE_FLIST env.

set -eu

log() { echo "[rfs-firmware] $*"; }

RFS_BIN="${RFS_BIN:-rfs}"
TARGET="/usr/lib/firmware"

# Allow override via env
if [ -n "${FIRMWARE_FLIST:-}" ] && [ -f "${FIRMWARE_FLIST}" ]; then
  FL="${FIRMWARE_FLIST}"
else
  # Candidate paths for the flist manifest
  for p in \
    /etc/rfs/firmware-latest.fl \
    /var/lib/rfs/firmware-latest.fl \
    /root/firmware-latest.fl \
    /firmware-latest.fl \
  ; do
    if [ -f "$p" ]; then
      FL="$p"
      break
    fi
  done
fi

if [ -z "${FL:-}" ]; then
  log "firmware-latest.fl not found in known paths; skipping mount"
  exit 0
fi

# Ensure target directory exists
mkdir -p "$TARGET"

# Skip if already mounted
if mountpoint -q "$TARGET" 2>/dev/null; then
  log "already mounted: $TARGET"
  exit 0
fi

# Perform the mount
log "mounting ${FL} -> ${TARGET}"
exec "$RFS_BIN" mount -m "$FL" "$TARGET"
config/zinit/init/modules.sh (new file, 47 lines)
@@ -0,0 +1,47 @@
#!/bin/sh
# rfs mount modules flist over /lib/modules/$(uname -r) (plain S3 route embedded in the .fl)
# Looks for modules-$(uname -r).fl in known locations; can be overridden via MODULES_FLIST env.

set -eu

log() { echo "[rfs-modules] $*"; }

RFS_BIN="${RFS_BIN:-rfs}"
KVER="$(uname -r)"
TARGET="/lib/modules/${KVER}"

# Allow override via env
if [ -n "${MODULES_FLIST:-}" ] && [ -f "${MODULES_FLIST}" ]; then
  FL="${MODULES_FLIST}"
else
  # Candidate paths for the flist manifest
  for p in \
    "/etc/rfs/modules-${KVER}.fl" \
    "/var/lib/rfs/modules-${KVER}.fl" \
    "/root/modules-${KVER}.fl" \
    "/modules-${KVER}.fl" \
  ; do
    if [ -f "$p" ]; then
      FL="$p"
      break
    fi
  done
fi

if [ -z "${FL:-}" ]; then
  log "modules-${KVER}.fl not found in known paths; skipping mount"
  exit 0
fi

# Ensure target directory exists
mkdir -p "$TARGET"

# Skip if already mounted
if mountpoint -q "$TARGET" 2>/dev/null; then
  log "already mounted: $TARGET"
  exit 0
fi

# Perform the mount
log "mounting ${FL} -> ${TARGET}"
exec "$RFS_BIN" mount -m "$FL" "$TARGET"
config/zinit/sshd-setup.yaml
@@ -1,2 +1,4 @@
 exec: sh /etc/zinit/init/sshd-setup.sh
 oneshot: true
+after:
+  - network