Compare commits

...

3 Commits

Author SHA1 Message Date
9aecfe26ac zinit: stabilize ntp/network/getty runtime
Some checks failed
Build Zero OS Initramfs / build (push) Has been cancelled
Build Zero OS Initramfs / test-matrix (qemu, basic) (push) Has been cancelled
Build Zero OS Initramfs / test-matrix (qemu, serial) (push) Has been cancelled
• ntp: robust /etc/ntp.conf symlink, safe defaults, avoid nounset, keep BusyBox CLI -p usage

• network: wrap dhcpcd to create dhcpcd user/group if missing; run as root if needed

• console: set getty console to 115200 vt100
2025-09-08 23:54:14 +02:00
652d38abb1 build/rfs: integrate RFS flists + runtime orchestration
• Add standalone RFS tooling: scripts/rfs/common.sh, pack-modules.sh, pack-firmware.sh, verify-flist.sh

• Patch flist route.url with read-only Garage S3 credentials; optional HTTPS store row; optional manifest upload via mcli

• Build integration: stage_rfs_flists in scripts/build.sh to pack and embed manifests under initramfs/etc/rfs

• Runtime: add zinit units rfs-modules (after: network), rfs-firmware (after: network) as daemons; add udev-rfs oneshot post-mount

• Keep early udev-trigger oneshot to coldplug NICs before RFS mounts

• Firmware flist reproducible naming: respect FIRMWARE_TAG from env or config/build.conf, default to latest

• Docs: update docs/rfs-flists.md with runtime ordering, reproducible tagging, verification steps
2025-09-08 23:39:20 +02:00
afd4f4c6f9 feat(rfs): flist pack to S3 + read-only route embedding + zinit mount scripts; docs; dev-container tooling
Summary
- Implemented plain S3-only flist workflow (no web endpoint). rfs pack uploads blobs using write creds; flist route.url is patched to embed read-only S3 credentials so rfs mount reads directly from S3.

Changes
1) New RFS tooling (scripts/rfs/)
   - common.sh:
     - Compute FULL_KERNEL_VERSION from configs (no uname).
     - Load S3 config and construct pack store URI.
     - Build read-only S3 route URL and patch flist (sqlite).
     - Helpers to locate modules/firmware trees and rfs binary.
   - pack-modules.sh:
     - Pack /lib/modules/<FULL_KERNEL_VERSION> to dist/flists/modules-<FULL_KERNEL_VERSION>.fl
     - Patch flist route to s3://READ:READ@host:port/ROUTE_PATH?region=ROUTE_REGION (default /blobs, garage).
     - Optional upload of .fl using MinIO client (mcli/mc).
   - pack-firmware.sh:
     - Source firmware from $PROJECT_ROOT/firmware (fallback to initramfs/lib/firmware).
     - Pack to dist/flists/firmware-<TAG_OR_DATE>.fl (FIRMWARE_TAG or YYYYMMDD).
     - Patch flist route to read-only S3; optional .fl upload via mcli/mc.
   - verify-flist.sh:
     - rfs flist inspect/tree; optional mount test (best effort).
   - patch-stores.sh:
     - Helper to patch stores (kept though not used by default).

2) Dev-container (Dockerfile)
   - Added sqlite and the MinIO client packages for manifest patching/upload (the client binary installs as mcli; scripts support both mcli and mc).
   - Retains rustup and musl target for building rfs/zinit/mycelium.

3) Config and examples
   - config/rfs.conf.example:
     - S3_ENDPOINT/S3_REGION/S3_BUCKET/S3_PREFIX
     - S3_ACCESS_KEY/S3_SECRET_KEY (write)
     - READ_ACCESS_KEY/READ_SECRET_KEY (read-only)
     - ROUTE_ENDPOINT (defaults to S3_ENDPOINT), ROUTE_PATH=/blobs, ROUTE_REGION=garage
     - MANIFESTS_SUBPATH, UPLOAD_MANIFESTS (mcli upload optional)
   - config/rfs.conf updated by user with real values (not committed here; example included).
   - config/modules.conf minor tweak (staged).

4) Zinit mount scripts (config/zinit/init/)
   - firmware.sh:
     - Mounts firmware-latest.fl over /usr/lib/firmware using rfs mount (env override FIRMWARE_FLIST supported).
   - modules.sh:
     - Mounts modules-$(uname -r).fl over /lib/modules/$(uname -r) (env override MODULES_FLIST supported).
   - Both skip if target already mounted and respect RFS_BIN env.

5) Documentation
   - docs/rfs-flists.md:
     - End-to-end flow, S3-only route URL patching, mcli upload notes.
   - docs/review-rfs-integration.md:
     - Integration points, build flow, and post-build standalone usage.
   - docs/depmod-behavior.md:
     - depmod reads .modinfo; recommend prebuilt modules.*(.bin); use depmod -A only on mismatch.

6) Utility
   - scripts/functionlist.md synced with current functions.

Behavioral details
- Pack (write):
  s3://S3_ACCESS_KEY:S3_SECRET_KEY@HOST:PORT/S3_BUCKET/S3_PREFIX?region=REGION
- Flist route (read, post-patch):
  s3://READ_ACCESS_KEY:READ_SECRET_KEY@HOST:PORT/ROUTE_PATH?region=ROUTE_REGION
  Defaults: ROUTE_PATH=/blobs, ROUTE_REGION=garage; ROUTE_ENDPOINT derived from S3_ENDPOINT if not set.
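- Example with illustrative placeholders (host, keys and bucket are not real values):
  pack store:  s3://WRITE_KEY:WRITE_SECRET@hub.grid.tf:3900/zos/zosbuilder/store?region=us-east-1
  flist route: s3://READ_KEY:READ_SECRET@hub.grid.tf:3900/blobs?region=garage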

Runtime mount examples
- Modules:
  rfs mount -m dist/flists/modules-6.12.44-Zero-OS.fl /lib/modules/6.12.44-Zero-OS
- Firmware:
  rfs mount -m dist/flists/firmware-YYYYMMDD.fl /usr/lib/firmware

Notes
- FUSE policy: if an "allow_other" error occurs, enable user_allow_other in /etc/fuse.conf or run the mounts as root.
- WEB_ENDPOINT rewrite is disabled by default (set WEB_ENDPOINT=""). Plain S3 route is embedded in flists.
- The MinIO client binary in the dev-container is mcli; the scripts support mcli (preferred) and mc (fallback).
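- Illustrative manifest upload via the MinIO client (alias name, bucket and path are placeholders):
  mcli alias set garage "$S3_ENDPOINT" "$S3_ACCESS_KEY" "$S3_SECRET_KEY"
  mcli cp dist/flists/modules-6.12.44-Zero-OS.fl garage/zos/zosbuilder/store/manifests/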

Files added/modified
- Added: scripts/rfs/{common.sh,pack-modules.sh,pack-firmware.sh,verify-flist.sh,patch-stores.sh}
- Added: config/zinit/init/{firmware.sh,modules.sh}
- Added: docs/{rfs-flists.md,review-rfs-integration.md,depmod-behavior.md}
- Added: config/rfs.conf.example
- Modified: Dockerfile, scripts/functionlist.md, config/modules.conf, config/zinit/sshd-setup.yaml, .gitignore
2025-09-08 22:51:53 +02:00
28 changed files with 1557 additions and 175 deletions

3
.gitignore vendored
View File

@@ -38,3 +38,6 @@ linux-*.tar.xz
# OS generated files
.DS_Store
Thumbs.db
# files containing secrets
config/rfs.conf

View File

@@ -32,6 +32,8 @@ RUN apk add --no-cache \
elfutils-dev \
ncurses-dev \
kmod \
sqlite \
minio-client \
pahole
# Setup rustup with stable and musl target
@@ -50,6 +52,5 @@ RUN chown builder:builder /workspace
# Set environment variables - rustup handles everything
ENV PATH="/root/.cargo/bin:${PATH}"
ENV RUSTFLAGS="-C target-feature=+crt-static"
CMD ["/bin/bash"]

View File

@@ -34,6 +34,15 @@ DIST_DIR="dist"
ALPINE_MIRROR="https://dl-cdn.alpinelinux.org/alpine"
KERNEL_SOURCE_URL="https://cdn.kernel.org/pub/linux/kernel"
# RFS flists (firmware manifest naming)
# FIRMWARE_TAG controls firmware flist manifest naming for reproducible builds.
# - If set, firmware manifest becomes: firmware-$FIRMWARE_TAG.fl
# - If unset, the build embeds firmware-latest.fl, while standalone pack may default to date-based naming.
# Examples:
# FIRMWARE_TAG="20250908"
# FIRMWARE_TAG="v1"
#FIRMWARE_TAG="latest"
# Feature flags
ENABLE_STRIP="true"
ENABLE_UPX="true"

View File

@@ -35,4 +35,5 @@ stage1:nvme_core:none # Core NVMe subsystem (REQUIRED)
stage1:nvme:none # NVMe storage
stage1:tun:none # TUN/TAP for networking
stage1:overlay:none # OverlayFS for containers
stage1:fuse:none # FUSE support (required for rfs flist mounts)

View File

@@ -21,6 +21,7 @@ util-linux
# Essential networking (for Zero-OS connectivity)
iproute2
ethtool
nftables
# Filesystem support (minimal)
btrfs-progs

57
config/rfs.conf.example Normal file
View File

@@ -0,0 +1,57 @@
# RFS S3 (Garage) configuration for flist storage and HTTP read endpoint
# Copy this file to config/rfs.conf and fill in real values (do not commit secrets).
# S3 API endpoint of your Garage server, including scheme and optional port
# Examples:
# https://hub.grid.tf
# http://minio:9000
S3_ENDPOINT="https://hub.grid.tf"
# AWS region string expected by the S3-compatible API
S3_REGION="us-east-1"
# Bucket and key prefix used for RFS store (content-addressed blobs)
# The RFS store path will be: s3://.../<S3_BUCKET>/<S3_PREFIX>
S3_BUCKET="zos"
S3_PREFIX="zosbuilder/store"
# Access credentials (required by rfs pack to push blobs)
S3_ACCESS_KEY="REPLACE_ME"
S3_SECRET_KEY="REPLACE_ME"
# Optional: HTTP(S) web endpoint used at runtime to fetch blobs without signed S3
# This is the base URL that serves the same objects as the S3 store, typically a
# public or authenticated gateway in front of Garage that allows read access.
# The scripts will patch the .fl (sqlite) stores table to use this endpoint.
# Ensure this path maps to the same content-addressed layout expected by rfs.
# Example:
# https://hub.grid.tf/zos/zosbuilder/store
WEB_ENDPOINT="https://hub.grid.tf/zos/zosbuilder/store"
# Optional: where to upload the .fl manifest sqlite file (separate from blob store)
# If you want to keep manifests alongside blobs, a common pattern is:
# s3://<S3_BUCKET>/<S3_PREFIX>/manifests/
# Scripts will create manifests/ under S3_PREFIX automatically if left default.
MANIFESTS_SUBPATH="manifests"
# Behavior flags (can be overridden by CLI flags or env)
# Whether to keep s3:// store as a fallback entry in the .fl after adding WEB_ENDPOINT
KEEP_S3_FALLBACK="false"
# Whether to attempt uploading .fl manifests to S3 (requires MinIO Client: mc)
UPLOAD_MANIFESTS="false"
# Read-only credentials for route URL in manifest (optional; defaults to write keys above)
# These will be embedded into the flist 'route.url' so runtime mounts can read directly from Garage.
# If not set, scripts fall back to S3_ACCESS_KEY/S3_SECRET_KEY.
READ_ACCESS_KEY="REPLACE_ME_READ"
READ_SECRET_KEY="REPLACE_ME_READ"
# Route endpoint and parameters for flist route URL patching
# - ROUTE_ENDPOINT: host:port base for Garage gateway (scheme is ignored; host:port is extracted)
# If not set, defaults to S3_ENDPOINT
# - ROUTE_PATH: path to the blob route (default: /blobs)
# - ROUTE_REGION: region string for Garage (default: garage)
ROUTE_ENDPOINT="https://hub.grid.tf"
ROUTE_PATH="/blobs"
ROUTE_REGION="garage"

View File

@@ -1,2 +1,2 @@
exec: /sbin/getty console
exec: /sbin/getty -L 115200 console vt100
restart: always

View File

@@ -0,0 +1,46 @@
#!/bin/sh
# rfs mount firmware flist over /usr/lib/firmware (plain S3 route inside the .fl)
# Looks for firmware-latest.fl in known locations; can be overridden via FIRMWARE_FLIST env
set -eu
log() { echo "[rfs-firmware] $*"; }
RFS_BIN="${RFS_BIN:-rfs}"
TARGET="/usr/lib/firmware"
# Allow override via env
if [ -n "${FIRMWARE_FLIST:-}" ] && [ -f "${FIRMWARE_FLIST}" ]; then
FL="${FIRMWARE_FLIST}"
else
# Candidate paths for the flist manifest
for p in \
/etc/rfs/firmware-latest.fl \
/var/lib/rfs/firmware-latest.fl \
/root/firmware-latest.fl \
/firmware-latest.fl \
; do
if [ -f "$p" ]; then
FL="$p"
break
fi
done
fi
if [ -z "${FL:-}" ]; then
log "firmware-latest.fl not found in known paths; skipping mount"
exit 0
fi
# Ensure target directory exists
mkdir -p "$TARGET"
# Skip if already mounted
if mountpoint -q "$TARGET" 2>/dev/null; then
log "already mounted: $TARGET"
exit 0
fi
# Perform the mount
log "mounting ${FL} -> ${TARGET}"
exec "$RFS_BIN" mount -m "$FL" "$TARGET"

View File

@@ -0,0 +1,47 @@
#!/bin/sh
# rfs mount modules flist over /lib/modules/$(uname -r) (plain S3 route embedded in the .fl)
# Looks for modules-$(uname -r).fl in known locations; can be overridden via MODULES_FLIST env.
set -eu
log() { echo "[rfs-modules] $*"; }
RFS_BIN="${RFS_BIN:-rfs}"
KVER="$(uname -r)"
TARGET="/lib/modules/${KVER}"
# Allow override via env
if [ -n "${MODULES_FLIST:-}" ] && [ -f "${MODULES_FLIST}" ]; then
FL="${MODULES_FLIST}"
else
# Candidate paths for the flist manifest
for p in \
"/etc/rfs/modules-${KVER}.fl" \
"/var/lib/rfs/modules-${KVER}.fl" \
"/root/modules-${KVER}.fl" \
"/modules-${KVER}.fl" \
; do
if [ -f "$p" ]; then
FL="$p"
break
fi
done
fi
if [ -z "${FL:-}" ]; then
log "modules-${KVER}.fl not found in known paths; skipping mount"
exit 0
fi
# Ensure target directory exists
mkdir -p "$TARGET"
# Skip if already mounted
if mountpoint -q "$TARGET" 2>/dev/null; then
log "already mounted: $TARGET"
exit 0
fi
# Perform the mount
log "mounting ${FL} -> ${TARGET}"
exec "$RFS_BIN" mount -m "$FL" "$TARGET"

7
config/zinit/init/network.sh Executable file
View File

@@ -0,0 +1,7 @@
#!/bin/sh
set -e
# Ensure dhcpcd user/group exist (some builds expect to drop privileges)
if ! getent group dhcpcd >/dev/null 2>&1; then addgroup -S dhcpcd 2>/dev/null || true; fi
if ! getent passwd dhcpcd >/dev/null 2>&1; then adduser -S -H -D -s /sbin/nologin -G dhcpcd dhcpcd 2>/dev/null || true; fi
# Exec dhcpcd (will run as root if it cannot drop to dhcpcd user)
exec dhcpcd "$@"

View File

@@ -1,10 +1,24 @@
#!/bin/sh
set -e
ntp_flags=$(grep -o 'ntp=.*' /proc/cmdline | sed 's/^ntp=//')
# Ensure /etc/ntp.conf exists for tools/hooks expecting it
if [ -f /etc/ntpd.conf ] && [ ! -e /etc/ntp.conf ]; then
ln -sf /etc/ntpd.conf /etc/ntp.conf
fi
# dhcpcd hook may write into /var/lib/ntp
mkdir -p /var/lib/ntp
# Extract ntp=... from kernel cmdline if present
ntp_flags="$(grep -o 'ntp=.*' /proc/cmdline | sed 's/^ntp=//' || true)"
params=""
if [ -n "$ntp_flags" ]; then
params=$(echo "-p $ntp_flags" | sed s/,/' -p '/g)
if [ -n "${ntp_flags}" ]; then
# Convert comma-separated list into multiple -p args
params="-p $(echo "${ntp_flags}" | sed 's/,/ -p /g')"
else
# Sensible defaults when no ntp= is provided
params="-p time.google.com -p time1.google.com -p time2.google.com -p time3.google.com"
fi
exec ntpd -n $params
# BusyBox ntpd uses -p servers on CLI; /etc/ntp.conf symlink above helps alternative daemons.
exec ntpd -n $params

View File

@@ -1,4 +1,4 @@
exec: dhcpcd eth0
exec: sh /etc/zinit/init/network.sh eth0
after:
- depmod
- udevd

View File

@@ -0,0 +1,4 @@
exec: sh /etc/zinit/init/firmware.sh
restart: always
after:
- network

View File

@@ -0,0 +1,4 @@
exec: sh /etc/zinit/init/modules.sh
restart: always
after:
- network

View File

@@ -1,2 +1,5 @@
exec: /etc/zinit/init/shm.sh
oneshot: true
after:
- firmware
- modules

View File

@@ -1,2 +1,4 @@
exec: sh /etc/zinit/init/sshd-setup.sh
oneshot: true
after:
- network

View File

@@ -0,0 +1,5 @@
exec: /bin/sh -c "udevadm control --reload; udevadm trigger --action=add --type=subsystems; udevadm trigger --action=add --type=devices; udevadm settle"
oneshot: true
after:
- rfs-modules
- rfs-firmware

71
docs/depmod-behavior.md Normal file
View File

@@ -0,0 +1,71 @@
# depmod behavior, impact on lazy-mounted module stores, and flist store rewriting
Summary (short answer)
- depmod builds the modules dependency/alias databases by scanning the modules tree under /lib/modules/<kernel>. It reads metadata from each .ko file (.modinfo section) to generate:
- modules.dep(.bin), modules.alias(.bin), modules.symbols(.bin), modules.devname, modules.order, etc.
- It does not load modules; it opens many files for small reads. On a lazy store, the first depmod run can trigger many object fetches.
- If modules metadata files are already present and consistent (as produced during build), modprobe can work without re-running depmod. Use depmod -A (update only) or skip depmod entirely if timestamps and paths are unchanged.
- For private S3 (garage) without anonymous read, post-process the .fl manifest to replace the store URI with your HTTPS web endpoint for that bucket, so runtime mounts fetch over the web endpoint instead of signed S3.
Details
1) What depmod actually reads/builds
- Inputs scanned under /lib/modules/<kernel>:
- .ko files: depmod reads ELF .modinfo to collect depends=, alias=, vermagic, etc. It does not execute or load modules.
- modules.builtin and modules.builtin.modinfo: indicate built-in drivers so they are excluded from dep graph.
- Optional flags:
- depmod -F <System.map> and -E <Module.symvers> allow symbol/CRC checks; these are typically not required on target systems for generating dependency/alias maps.
- Outputs (consumed by modprobe/kmod):
- modules.dep and modules.dep.bin: dependency lists and fast index
- modules.alias and modules.alias.bin: modalias to module name mapping
- modules.symbols(.bin), modules.devname, modules.order, etc.
Key property: depmod's default operation opens many .ko files to read .modinfo, which on a lazy FUSE-backed store causes many small reads.
2) Recommended strategy with lazy flists
- Precompute metadata during build:
- In the dev container, your pipeline already runs depmod (see [kernel_build_modules()](scripts/lib/kernel.sh:228)). Ensure the resulting metadata files in /lib/modules/<kernel> are included in the modules flist.
- At runtime after overmounting the modules flist:
- Option A: Do nothing. If your path is the same (/lib/modules/<kernel>), modprobe will use the precomputed .bin maps and will not need to rescan .ko files. This minimizes object fetches (only when a module is actually loaded).
- Option B: Run depmod -A <kernel> (update only if any .ko newer than modules.dep). This typically performs stats on files and only rebuilds if needed, avoiding a full read of all .ko files.
- Option C: Run depmod -a only if you changed the module set or path layout. Expect many small reads on first run.
3) Firmware implications
- No depmod impact, but udev coldplug will probe devices. Keep firmware files accessible via the firmware flist mount (e.g., /usr/lib/firmware).
- Since firmware loads on-demand by the kernel/driver, the lazy store will fetch only needed blobs.
4) Post-processing .fl to use a web endpoint (garage S3 private)
- Goal: Pack/upload blobs to private S3 using credentials, but ship a manifest (.fl) that references a public HTTPS endpoint (or authenticated gateway) that your rfs mount can fetch from without S3 signing.
- Approach A: Use rfs CLI (if supported) to edit store URIs within the manifest.
- Example (conceptual): rfs flist edit-store -m dist/flists/modules-...fl --set https://web.example.com/bucket/prefix
- Approach B: Use sqlite3 to patch the manifest directly (the .fl is sqlite):
- Inspect stores:
- sqlite3 dist/flists/modules-...fl "SELECT id, uri FROM stores;"
- Replace s3 store with web endpoint:
- sqlite3 dist/flists/modules-...fl "UPDATE stores SET uri='https://web.example.com/bucket/prefix' WHERE uri LIKE 's3://%';"
- Validate:
- rfs flist inspect dist/flists/modules-...fl
- Notes:
- The web endpoint you provide must serve the same content-addressed paths that rfs expects. Confirm the object path layout (e.g., /bucket/prefix/ab/cd/abcdef...).
- You can maintain multiple store rows to provide fallbacks (if rfs supports trying multiple stores).
5) Suggested runtime sequence after overmount (with precomputed metadata)
- Mount modules flist read-only at /lib/modules/<kernel>.
- Optionally depmod -A <kernel> (cheap; no full scan).
- udevadm control --reload; udevadm trigger --action=add; udevadm settle
- Load required baseline modules (stage1) if needed; the lazy store ensures only requested .ko files are fetched.
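As a minimal sketch, assuming the modules flist is already mounted at /lib/modules/$(uname -r) by the rfs-modules unit, the post-mount steps reduce to:

```sh
KVER="$(uname -r)"
depmod -A "$KVER"            # cheap: rebuilds only if some .ko is newer than modules.dep
udevadm control --reload     # pick up any new rules from the mounted trees
udevadm trigger --action=add # coldplug with the enlarged module/firmware set
udevadm settle               # wait for the event queue to drain
```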
6) Practical checklist for our scripts
- Ensure pack-modules includes:
- /lib/modules/<kernel>/*.ko*
- All modules.* metadata files (dep, alias, symbols, order, builtin, *.bin)
- After pack completes and blobs are uploaded to S3, patch the .fl manifest's stores table to point at the public HTTPS endpoint of your garage bucket/web gateway.
- Provide verify utilities:
- rfs flist inspect/tree
- Optional local mount test against the web endpoint referenced in the manifest.
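A quick check that a packed manifest carries the metadata files (filename is illustrative; this uses the same rfs flist tree helper wrapped by verify-flist.sh):

```sh
rfs flist tree dist/flists/modules-6.12.44-Zero-OS.fl | grep -E 'modules\.(dep|alias|symbols)'
```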
Appendix: Commands and flags
- Generate/update metadata (build-time): depmod -a <kernel>
- Fast update at boot: depmod -A <kernel> # only if newer/changed
- Chroot/base path (useful for initramfs image pathing): depmod -b <base> -a <kernel>
- Modprobe uses *.bin maps when present, which avoids parsing large text maps on every lookup.

View File

@@ -0,0 +1,179 @@
# Review: Current Build Flow and RFS Integration Hook Points
This document reviews the current Zero-OS Alpine initramfs build flow, identifies reliable sources for kernel versioning and artifacts, and specifies clean integration points for RFS flist generation and later runtime overmounts without modifying existing code paths.
## Build flow overview
Primary orchestrator: [scripts/build.sh](scripts/build.sh)
Key sourced libraries:
- [alpine.sh](scripts/lib/alpine.sh)
- [components.sh](scripts/lib/components.sh)
- [kernel.sh](scripts/lib/kernel.sh)
- [initramfs.sh](scripts/lib/initramfs.sh)
- [stages.sh](scripts/lib/stages.sh)
- [docker.sh](scripts/lib/docker.sh)
- [testing.sh](scripts/lib/testing.sh)
Main stages executed (incremental via [stage_run()](scripts/lib/stages.sh:99)):
1) alpine_extract, alpine_configure, alpine_packages
2) alpine_firmware
3) components_build, components_verify
4) kernel_modules
5) init_script, components_copy, zinit_setup
6) modules_setup, modules_copy
7) cleanup, validation
8) initramfs_create, initramfs_test, kernel_build
9) boot_tests
## Where key artifacts come from
- Kernel full version:
- Derived deterministically using [kernel_get_full_version()](scripts/lib/kernel.sh:14)
- Computed as: KERNEL_VERSION from [config/build.conf](config/build.conf) + CONFIG_LOCALVERSION from [config/kernel.config](config/kernel.config)
- Example target: 6.12.44-Zero-OS
- Built modules in container:
- Stage: [kernel_build_modules()](scripts/lib/kernel.sh:228)
- Builds and installs into container root: /lib/modules/<FULL_VERSION>
- Runs depmod in container and sets:
- CONTAINER_MODULES_PATH=/lib/modules/<FULL_VERSION>
- KERNEL_FULL_VERSION=<FULL_VERSION>
- Initramfs modules copy and metadata:
- Stage: [initramfs_copy_resolved_modules()](scripts/lib/initramfs.sh:846)
- Copies selected modules and dep metadata into initramfs under initramfs/lib/modules/<FULL_VERSION>
- Firmware content:
- Preferred (per user): a full tree at $root/firmware in the dev-container, intended to be packaged as-is
- Fallback within build flow: firmware packages installed by [alpine_install_firmware()](scripts/lib/alpine.sh:392) into initramfs/lib/firmware
- rfs binary:
- Built via [build_rfs()](scripts/lib/components.sh:299) into [components/rfs/target/x86_64-unknown-linux-musl/release/rfs](components/rfs/target/x86_64-unknown-linux-musl/release/rfs)
- Also expected to be available on PATH inside dev-container
## udev and module load sequencing at runtime
- zinit units present:
- udevd: [config/zinit/udevd.yaml](config/zinit/udevd.yaml)
- depmod: [config/zinit/depmod.yaml](config/zinit/depmod.yaml)
- udev trigger: [config/zinit/udev-trigger.yaml](config/zinit/udev-trigger.yaml) calling [udev.sh](config/zinit/init/udev.sh)
- initramfs module orchestration:
- Module resolution logic: [initramfs_setup_modules()](scripts/lib/initramfs.sh:225) and [initramfs_resolve_module_dependencies()](scripts/lib/initramfs.sh:313)
- Load scripts created for zinit:
- stage1: [initramfs_create_module_scripts()](scripts/lib/initramfs.sh:422) emits /etc/zinit/init/stage1-modules.sh
- stage2 is currently disabled in config
## Current integration gaps for RFS flists
- There is no existing code that:
- Packs modules or firmware into RFS flists (.fl sqlite manifests)
- Publishes associated content-addressed blobs to a store
- Uploads the .fl manifest to an S3 bucket (separate from the blob store)
- Mounts these flists at runtime prior to udev coldplug
## Reliable inputs for RFS pack
- Kernel full version: use [kernel_get_full_version()](scripts/lib/kernel.sh:14) logic (never `uname -r` inside container)
- Modules source tree candidates (priority):
1) /lib/modules/<FULL_VERSION> (from [kernel_build_modules()](scripts/lib/kernel.sh:228))
2) initramfs/lib/modules/<FULL_VERSION> (if container path unavailable; less ideal)
- Firmware source tree candidates (priority):
1) $PROJECT_ROOT/firmware (external provided tree; user-preferred)
2) initramfs/lib/firmware (APK-installed fallback)
## S3 configuration needs
A new configuration file is required to avoid touching existing code:
- Path: config/rfs.conf (to be created)
- Required keys:
- S3_ENDPOINT (e.g., https://s3.example.com:9000)
- S3_REGION
- S3_BUCKET
- S3_PREFIX (path prefix under bucket for blobs/optionally manifests)
- S3_ACCESS_KEY
- S3_SECRET_KEY
- These values will be consumed by standalone scripts (not existing build flow)
## Proposed standalone scripts (no existing code changes)
Directory: scripts/rfs
- common.sh
- Read [config/build.conf](config/build.conf), [config/kernel.config](config/kernel.config) to compute FULL_KERNEL_VERSION
- Read [config/rfs.conf](config/rfs.conf) and construct RFS S3 store URI
- Detect rfs binary from PATH or [components/rfs](components/rfs)
- Locate modules and firmware source trees per the above priority order
- pack-modules.sh
- Name: modules-<FULL_KERNEL_VERSION>.fl
- Command: rfs pack -m dist/flists/modules-...fl -s s3://... /lib/modules/<FULL_VERSION>
- Then upload the .fl manifest to s3://BUCKET/PREFIX/manifests/ via aws CLI if available
- pack-firmware.sh
- Name: firmware-<YYYYMMDD>.fl by default, overridable via FIRMWARE_TAG
- Source: $PROJECT_ROOT/firmware preferred, else initramfs/lib/firmware
- Pack with rfs and upload the .fl manifest similarly
- verify-flist.sh
- rfs flist inspect dist/flists/NAME.fl
- rfs flist tree dist/flists/NAME.fl | head
- Optional test mount with a temporary mountpoint when requested
## Future runtime units (deferred)
Will be added as new zinit units once flist generation is validated:
- Mount firmware flist read-only at /usr/lib/firmware
- Mount modules flist read-only at /lib/modules/<FULL_VERSION>
- Run depmod -a <FULL_VERSION>
- Run udev coldplug sequence (reload, trigger add, settle)
Placement relative to current units:
- Must occur before [udev-trigger.yaml](config/zinit/udev-trigger.yaml)
- Should ensure [depmod.yaml](config/zinit/depmod.yaml) is sequenced after modules are available from mount
## Flow summary (Mermaid)
```mermaid
flowchart TD
A[Build start] --> B[alpine_extract/configure/packages]
B --> C[components_build verify]
C --> D[kernel_modules
install modules in container
set KERNEL_FULL_VERSION]
D --> E[init_script zinit_setup]
E --> F[modules_setup copy]
F --> G[cleanup validation]
G --> H[initramfs_create test kernel_build]
H --> I[boot_tests]
subgraph RFS standalone
R1[Compute FULL_VERSION
from configs]
R2[Select sources:
modules /lib/modules/FULL_VERSION
firmware PROJECT_ROOT/firmware or initramfs/lib/firmware]
R3[Pack modules flist
rfs pack -s s3://...]
R4[Pack firmware flist
rfs pack -s s3://...]
R5[Upload .fl manifests
to S3 manifests/]
R6[Verify flists
inspect/tree/mount opt]
end
H -. post-build manual .-> R1
R1 --> R2 --> R3 --> R5
R2 --> R4 --> R5
R3 --> R6
R4 --> R6
```
## Conclusion
- The existing build flow provides deterministic kernel versioning and installs modules into the container at /lib/modules/<FULL_VERSION>, which is ideal for RFS packing.
- Firmware can be sourced from the user-provided tree or the initramfs fallback.
- RFS flist creation and publishing can be introduced entirely as standalone scripts and configuration without modifying current code.
- Runtime overmounting and coldplug can be added later via new zinit units once flist generation is validated.

166
docs/rfs-flists.md Normal file
View File

@@ -0,0 +1,166 @@
# RFS flist creation and runtime overmounts (design)
Goal
- Produce two flists without modifying existing build scripts:
- firmware-VERSION.fl
- modules-KERNEL_FULL_VERSION.fl
- Store blobs in S3 via rfs store; upload .fl manifest (sqlite) separately to S3.
- Overmount these at runtime later to enable extended hardware, then depmod + udev trigger.
Scope of this change
- Add standalone scripts under [scripts/rfs](scripts/rfs) (no changes in existing libs or stages).
- Add a config file [config/rfs.conf](config/rfs.conf) for S3 credentials and addressing.
- Document the flow and usage here; scripting comes next.
Inputs
- Built kernel modules present in the dev-container (from kernel build stages):
- Preferred: /lib/modules/KERNEL_FULL_VERSION
- Firmware tree:
- Preferred: $PROJECT_ROOT/firmware (prepopulated tree from dev-container: “$root/firmware”)
- Fallback: initramfs/lib/firmware created by apk install of firmware packages
- Kernel version derivation (never use uname -r in container):
- Combine KERNEL_VERSION from [config/build.conf](config/build.conf) and LOCALVERSION from [config/kernel.config](config/kernel.config).
- This matches [kernel_get_full_version()](scripts/lib/kernel.sh:14).
Outputs and locations
- Flists:
- [dist/flists/firmware-VERSION.fl](dist/flists/firmware-VERSION.fl)
- [dist/flists/modules-KERNEL_FULL_VERSION.fl](dist/flists/modules-KERNEL_FULL_VERSION.fl)
- Blobs are uploaded by rfs to the configured S3 store.
- Manifests (.fl sqlite) are uploaded by script as S3 objects (separate from blob store).
Configuration: [config/rfs.conf](config/rfs.conf)
Required values:
- S3_ENDPOINT=https://s3.example.com:9000
- S3_REGION=us-east-1
- S3_BUCKET=zos
- S3_PREFIX=flists/zosbuilder
- S3_ACCESS_KEY=AKIA...
- S3_SECRET_KEY=...
Notes:
- We construct an rfs S3 store URI for pack operations (for blob uploads during pack):
- s3://S3_ACCESS_KEY:S3_SECRET_KEY@HOST:PORT/S3_BUCKET/S3_PREFIX?region=S3_REGION
- After pack, we correct the flist route URL to include READ-ONLY credentials so mounts can read directly from Garage:
- UPDATE route SET url='s3://READ_ACCESS_KEY:READ_SECRET_KEY@HOST:PORT/ROUTE_PATH?region=ROUTE_REGION'
- Defaults: ROUTE_PATH=/blobs, ROUTE_REGION=garage, ROUTE_ENDPOINT=S3_ENDPOINT (overridable)
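A minimal sketch of that patch step (the real helper lives in [scripts/rfs/common.sh](scripts/rfs/common.sh); the host and keys below are placeholders):

```sh
FL="dist/flists/modules-6.12.44-Zero-OS.fl"
ROUTE_URL="s3://READ_ACCESS_KEY:READ_SECRET_KEY@hub.grid.tf:3900/blobs?region=garage"
sqlite3 "$FL" "UPDATE route SET url='${ROUTE_URL}';"   # embed the read-only route
sqlite3 "$FL" "SELECT url FROM route;"                 # confirm the patched value
```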
Scripts to add (standalone)
- [scripts/rfs/common.sh](scripts/rfs/common.sh)
- Read [config/build.conf](config/build.conf) and [config/kernel.config](config/kernel.config).
- Compute FULL_KERNEL_VERSION exactly as [kernel_get_full_version()](scripts/lib/kernel.sh:14).
- Read and validate [config/rfs.conf](config/rfs.conf).
- Build S3 store URI for rfs.
- Locate module and firmware source trees (with priority rules).
- Locate rfs binary (PATH first, fallback to [components/rfs/target/x86_64-unknown-linux-musl/release/rfs](components/rfs/target/x86_64-unknown-linux-musl/release/rfs)).
- [scripts/rfs/pack-modules.sh](scripts/rfs/pack-modules.sh)
- Name: modules-KERNEL_FULL_VERSION.fl (e.g., modules-6.12.44-Zero-OS.fl).
- rfs pack -m dist/flists/modules-...fl -s s3://... /lib/modules/KERNEL_FULL_VERSION
- Optional: upload dist/flists/modules-...fl to s3://S3_BUCKET/S3_PREFIX/manifests/ using MinIO Client (mc) if present.
- [scripts/rfs/pack-firmware.sh](scripts/rfs/pack-firmware.sh)
- Source: $PROJECT_ROOT/firmware if exists, else initramfs/lib/firmware.
- Name: firmware-YYYYMMDD.fl by default; override with FIRMWARE_TAG env to firmware-FIRMWARE_TAG.fl.
- rfs pack as above; optional upload of the .fl manifest using MinIO Client (mc) if present.
- [scripts/rfs/verify-flist.sh](scripts/rfs/verify-flist.sh)
- rfs flist inspect dist/flists/NAME.fl
- rfs flist tree dist/flists/NAME.fl | head
- Optional: test mount if run with --mount (mountpoint under /tmp).
Runtime (deferred to a follow-up)
- New zinit units to mount and coldplug:
- Mount firmware flist read-only at /usr/lib/firmware
- Mount modules flist at /lib/modules/KERNEL_FULL_VERSION
- Run depmod -a KERNEL_FULL_VERSION
- udevadm control --reload; udevadm trigger --action=add; udevadm settle
- Placement examples (to be created later):
- [config/zinit/rfs-modules.yaml](config/zinit/rfs-modules.yaml)
- [config/zinit/rfs-firmware.yaml](config/zinit/rfs-firmware.yaml)
- Keep in correct dependency order before [config/zinit/udev-trigger.yaml](config/zinit/udev-trigger.yaml).
Naming policy
- modules flist:
- modules-KERNEL_FULL_VERSION.fl
- firmware flist:
- firmware-YYYYMMDD.fl by default
- firmware-FIRMWARE_TAG.fl if env FIRMWARE_TAG is set
Usage flow (after your normal build inside dev-container)
1) Create config for S3: [config/rfs.conf](config/rfs.conf)
2) Generate modules flist: [scripts/rfs/pack-modules.sh](scripts/rfs/pack-modules.sh)
3) Generate firmware flist: [scripts/rfs/pack-firmware.sh](scripts/rfs/pack-firmware.sh)
4) Verify manifests: [scripts/rfs/verify-flist.sh](scripts/rfs/verify-flist.sh) dist/flists/modules-...fl
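For example, a typical post-build sequence (illustrative kernel version; run from the project root inside the dev-container):

```sh
cp config/rfs.conf.example config/rfs.conf    # then fill in real S3 values (do not commit)
scripts/rfs/pack-modules.sh                   # -> dist/flists/modules-<FULL_KERNEL_VERSION>.fl
scripts/rfs/pack-firmware.sh                  # -> dist/flists/firmware-<TAG_OR_DATE>.fl
scripts/rfs/verify-flist.sh -m dist/flists/modules-6.12.44-Zero-OS.fl --tree
```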
Assumptions
- rfs supports s3 store URIs as described (per [components/rfs/README.md](components/rfs/README.md)).
- The dev-container has the built kernel modules in /lib/modules/KERNEL_FULL_VERSION (as produced via [kernel_build_modules()](scripts/lib/kernel.sh:228)).
- No changes are made to existing build scripts. The new scripts are run on-demand.
Open question to confirm
- Confirm S3 endpoint form (with or without explicit port) and whether we should prefer AWS_REGION env over query param; scripts will support both patterns.
Note on route URL vs HTTP endpoint
- rfs mount reads blobs via s3:// URLs, not via an arbitrary HTTP(S) endpoint. A reverse proxy is not required if you embed read-only S3 credentials in the flist.
- This project now patches the flist after pack to set route.url to a read-only Garage S3 URL:
- Example SQL equivalent:
- UPDATE route SET url='s3://READ_ACCESS_KEY:READ_SECRET_KEY@[HOST]:3900/blobs?region=garage';
- Configure these in config/rfs.conf:
- READ_ACCESS_KEY / READ_SECRET_KEY: read-only credentials
- ROUTE_ENDPOINT (defaults to S3_ENDPOINT), ROUTE_PATH=/blobs, ROUTE_REGION=garage
- Do not set ROUTE_PATH to S3_PREFIX. ROUTE_PATH is the gateway's blob route (usually /blobs). S3_PREFIX is only for the pack-time store path.
## Runtime units and ordering (zinit)
This repo now includes runtime zinit units and init scripts to mount the RFS flists and perform dual udev coldplug sequences.
- Early coldplug (before RFS mounts):
- [config/zinit/udev-trigger.yaml](config/zinit/udev-trigger.yaml) calls [config/zinit/init/udev.sh](config/zinit/init/udev.sh).
- Runs after depmod/udev daemons to initialize NICs and other devices using what is already in the initramfs.
- Purpose: bring up networking so RFS can reach Garage S3.
- RFS mounts (daemons, after network):
- [config/zinit/rfs-modules.yaml](config/zinit/rfs-modules.yaml) runs [config/zinit/init/modules.sh](config/zinit/init/modules.sh) to mount modules-$(uname -r).fl onto /lib/modules/$(uname -r).
- [config/zinit/rfs-firmware.yaml](config/zinit/rfs-firmware.yaml) runs [config/zinit/init/firmware.sh](config/zinit/init/firmware.sh) to mount firmware-latest.fl onto /usr/lib/firmware.
- Both are defined as restart: always and include after: network to ensure the Garage S3 route is reachable.
- Post-mount coldplug (after RFS mounts):
- [config/zinit/udev-rfs.yaml](config/zinit/udev-rfs.yaml) performs:
- udevadm control --reload
- udevadm trigger --action=add --type=subsystems
- udevadm trigger --action=add --type=devices
- udevadm settle
- This re-probes hardware so new modules/firmware from the overmounted flists are considered.
- Embedded manifests in initramfs:
- The build embeds the flists under /etc/rfs:
- modules-KERNEL_FULL_VERSION.fl
- firmware-latest.fl
- Creation happens in [scripts/rfs/pack-modules.sh](scripts/rfs/pack-modules.sh) and [scripts/rfs/pack-firmware.sh](scripts/rfs/pack-firmware.sh), and embedding is orchestrated by [scripts/build.sh](scripts/build.sh).
## Reproducible firmware tagging
- The firmware flist name can be pinned via FIRMWARE_TAG in [config/build.conf](config/build.conf).
- If set: firmware-FIRMWARE_TAG.fl
- If unset: the build uses firmware-latest.fl for embedding (standalone pack may default to date-based).
- The build logic picks the tag with this precedence:
1) Environment FIRMWARE_TAG
2) FIRMWARE_TAG from [config/build.conf](config/build.conf)
3) "latest"
- Build integration implemented in [scripts/build.sh](scripts/build.sh).
Example:
- Set FIRMWARE_TAG in config: add FIRMWARE_TAG="20250908" in [config/build.conf](config/build.conf)
- Or export at build time: export FIRMWARE_TAG="v1"
## Verifying flists
Use the helper to inspect a manifest, optionally listing entries and testing a local mount (root + proper FUSE policy required):
- Inspect only:
- scripts/rfs/verify-flist.sh -m dist/flists/modules-6.12.44-Zero-OS.fl
- Inspect + tree:
- scripts/rfs/verify-flist.sh -m dist/flists/firmware-latest.fl --tree
- Inspect + mount test to a temp dir:
- sudo scripts/rfs/verify-flist.sh -m dist/flists/modules-6.12.44-Zero-OS.fl --mount

View File

@@ -287,6 +287,62 @@ function main_build_process() {
initramfs_copy_resolved_modules "$INSTALL_DIR" "$FULL_KERNEL_VERSION"
}
# Create RFS flists and embed them into initramfs prior to CPIO
function stage_rfs_flists() {
section_header "Creating RFS flists and embedding into initramfs"
# Ensure FULL_KERNEL_VERSION is available
if [[ -z "${FULL_KERNEL_VERSION:-}" ]]; then
FULL_KERNEL_VERSION=$(kernel_get_full_version "$KERNEL_VERSION" "$KERNEL_CONFIG")
export FULL_KERNEL_VERSION
log_info "Resolved FULL_KERNEL_VERSION: ${FULL_KERNEL_VERSION}"
fi
# Ensure rfs scripts are executable (avoid subshell to preserve quoting)
safe_execute chmod +x ./scripts/rfs/*.sh
# Build modules flist (writes to dist/flists/modules-${FULL_KERNEL_VERSION}.fl)
safe_execute ./scripts/rfs/pack-modules.sh
# Build firmware flist with a reproducible tag:
# Priority: env FIRMWARE_TAG > config/build.conf: FIRMWARE_TAG > "latest"
local fw_tag
if [[ -n "${FIRMWARE_TAG:-}" ]]; then
fw_tag="${FIRMWARE_TAG}"
else
if [[ -f "${CONFIG_DIR}/build.conf" ]]; then
# shellcheck source=/dev/null
source "${CONFIG_DIR}/build.conf"
fi
fw_tag="${FIRMWARE_TAG:-latest}"
fi
log_info "Using firmware tag: ${fw_tag}"
safe_execute env FIRMWARE_TAG="${fw_tag}" ./scripts/rfs/pack-firmware.sh
# Embed flists inside initramfs at /etc/rfs for zinit init scripts
local etc_rfs_dir="${INSTALL_DIR}/etc/rfs"
safe_mkdir "${etc_rfs_dir}"
local modules_fl="dist/flists/modules-${FULL_KERNEL_VERSION}.fl"
if [[ -f "${modules_fl}" ]]; then
safe_execute cp "${modules_fl}" "${etc_rfs_dir}/"
log_info "Embedded modules flist: ${modules_fl} -> ${etc_rfs_dir}/"
else
log_warn "Modules flist not found: ${modules_fl}"
fi
local firmware_fl="dist/flists/firmware-${fw_tag}.fl"
if [[ -f "${firmware_fl}" ]]; then
# Provide canonical name firmware-latest.fl expected by firmware.sh
safe_execute cp "${firmware_fl}" "${etc_rfs_dir}/firmware-latest.fl"
log_info "Embedded firmware flist: ${firmware_fl} -> ${etc_rfs_dir}/firmware-latest.fl"
else
log_warn "Firmware flist not found: ${firmware_fl}"
fi
log_info "RFS flists embedded into initramfs"
}
function stage_cleanup() {
alpine_aggressive_cleanup "$INSTALL_DIR"
}
@@ -336,6 +392,7 @@ function main_build_process() {
stage_run "modules_setup" stage_modules_setup
stage_run "modules_copy" stage_modules_copy
stage_run "cleanup" stage_cleanup
stage_run "rfs_flists" stage_rfs_flists
stage_run "validation" stage_validation
stage_run "initramfs_create" stage_initramfs_create
stage_run "initramfs_test" stage_initramfs_test

View File

@@ -1,167 +1,112 @@
# Function List - scripts/lib Library
This document provides a comprehensive description of all functions available in the `scripts/lib` library that are to be sourced by build scripts.
This document lists all functions currently defined under [scripts/lib](scripts/lib) with their source locations.
## **alpine.sh** - Alpine Linux Operations
## alpine.sh - Alpine Linux operations
File: [scripts/lib/alpine.sh](scripts/lib/alpine.sh)
- [alpine_extract_miniroot()](scripts/lib/alpine.sh:14) - Download and extract Alpine miniroot
- [alpine_setup_chroot()](scripts/lib/alpine.sh:70) - Setup chroot mounts and resolv.conf
- [alpine_cleanup_chroot()](scripts/lib/alpine.sh:115) - Unmount chroot mounts
- [alpine_install_packages()](scripts/lib/alpine.sh:142) - Install packages from packages.list
- [alpine_aggressive_cleanup()](scripts/lib/alpine.sh:211) - Reduce image size by removing docs/locales/etc
- [alpine_configure_repos()](scripts/lib/alpine.sh:321) - Configure APK repositories
- [alpine_configure_system()](scripts/lib/alpine.sh:339) - Configure hostname, hosts, timezone, profile
- [alpine_install_firmware()](scripts/lib/alpine.sh:392) - Install required firmware packages
### Core Functions
- [`alpine_extract_miniroot()`](lib/alpine.sh:14) - Downloads and extracts Alpine miniroot to target directory
- [`alpine_setup_chroot()`](lib/alpine.sh:70) - Sets up chroot environment with essential filesystem mounts
- [`alpine_cleanup_chroot()`](lib/alpine.sh:115) - Unmounts and cleans up chroot environment
- [`alpine_install_packages()`](lib/alpine.sh:142) - Installs packages from packages.list (excludes OpenRC)
- [`alpine_aggressive_cleanup()`](lib/alpine.sh:211) - Removes documentation, locales, dev files for size optimization
- [`alpine_configure_repos()`](lib/alpine.sh:302) - Configures Alpine package repositories
- [`alpine_configure_system()`](lib/alpine.sh:320) - Sets up basic system configuration (hostname, hosts, timezone)
- [`alpine_install_firmware()`](lib/alpine.sh:374) - Installs firmware packages for hardware support
## common.sh - Core utilities
File: [scripts/lib/common.sh](scripts/lib/common.sh)
- [log_info()](scripts/lib/common.sh:31)
- [log_warn()](scripts/lib/common.sh:36)
- [log_error()](scripts/lib/common.sh:41)
- [log_debug()](scripts/lib/common.sh:46)
- [safe_execute()](scripts/lib/common.sh:54)
- [section_header()](scripts/lib/common.sh:79)
- [command_exists()](scripts/lib/common.sh:89)
- [in_container()](scripts/lib/common.sh:94)
- [check_dependencies()](scripts/lib/common.sh:99)
- [safe_mkdir()](scripts/lib/common.sh:142)
- [safe_rmdir()](scripts/lib/common.sh:149)
- [safe_copy()](scripts/lib/common.sh:158)
- [is_absolute_path()](scripts/lib/common.sh:166)
- [resolve_path()](scripts/lib/common.sh:171)
- [get_file_size()](scripts/lib/common.sh:181)
- [wait_for_file()](scripts/lib/common.sh:191)
- [cleanup_on_exit()](scripts/lib/common.sh:205)
## **common.sh** - Core Utilities
## components.sh - Component management
File: [scripts/lib/components.sh](scripts/lib/components.sh)
- [components_parse_sources_conf()](scripts/lib/components.sh:13)
- [components_download_git()](scripts/lib/components.sh:72)
- [components_download_release()](scripts/lib/components.sh:104)
- [components_process_extra_options()](scripts/lib/components.sh:144)
- [components_build_component()](scripts/lib/components.sh:183)
- [components_setup_rust_env()](scripts/lib/components.sh:217)
- [build_zinit()](scripts/lib/components.sh:252)
- [build_rfs()](scripts/lib/components.sh:299)
- [build_mycelium()](scripts/lib/components.sh:346)
- [install_rfs()](scripts/lib/components.sh:386)
- [install_corex()](scripts/lib/components.sh:409)
- [components_verify_installation()](scripts/lib/components.sh:436)
- [components_cleanup()](scripts/lib/components.sh:472)
### Logging Functions
- [`log_info()`](lib/common.sh:31) - Log informational messages with timestamp and color
- [`log_warn()`](lib/common.sh:36) - Log warning messages with timestamp and color
- [`log_error()`](lib/common.sh:41) - Log error messages with timestamp and color
- [`log_debug()`](lib/common.sh:46) - Log debug messages (only when DEBUG=1)
## docker.sh - Container runtime management
File: [scripts/lib/docker.sh](scripts/lib/docker.sh)
- [docker_detect_runtime()](scripts/lib/docker.sh:14)
- [docker_verify_rootless()](scripts/lib/docker.sh:33)
- [docker_build_container()](scripts/lib/docker.sh:47)
- [docker_create_dockerfile()](scripts/lib/docker.sh:65)
- [docker_start_rootless()](scripts/lib/docker.sh:116)
- [docker_run_build()](scripts/lib/docker.sh:154)
- [docker_commit_builder()](scripts/lib/docker.sh:196)
- [docker_cleanup()](scripts/lib/docker.sh:208)
- [docker_check_capabilities()](scripts/lib/docker.sh:248)
- [docker_setup_rootless()](scripts/lib/docker.sh:279)
### Execution and System Functions
- [`safe_execute()`](lib/common.sh:54) - Execute commands with error handling and logging
- [`section_header()`](lib/common.sh:76) - Creates formatted section headers for output
- [`command_exists()`](lib/common.sh:86) - Check if command is available in PATH
- [`in_container()`](lib/common.sh:91) - Detect if running inside a container
- [`check_dependencies()`](lib/common.sh:96) - Verify required tools are installed
## initramfs.sh - Initramfs assembly
File: [scripts/lib/initramfs.sh](scripts/lib/initramfs.sh)
- [initramfs_setup_zinit()](scripts/lib/initramfs.sh:13)
- [initramfs_install_init_script()](scripts/lib/initramfs.sh:70)
- [initramfs_copy_components()](scripts/lib/initramfs.sh:97)
- [initramfs_setup_modules()](scripts/lib/initramfs.sh:225)
- [initramfs_resolve_module_dependencies()](scripts/lib/initramfs.sh:313)
- [initramfs_create_module_scripts()](scripts/lib/initramfs.sh:422)
- [initramfs_strip_and_upx()](scripts/lib/initramfs.sh:486)
- [initramfs_finalize_customization()](scripts/lib/initramfs.sh:569)
- [initramfs_create_cpio()](scripts/lib/initramfs.sh:642)
- [initramfs_validate()](scripts/lib/initramfs.sh:710)
- [initramfs_test_archive()](scripts/lib/initramfs.sh:809)
- [initramfs_copy_resolved_modules()](scripts/lib/initramfs.sh:846)
### File System Operations
- [`safe_mkdir()`](lib/common.sh:139) - Create directories safely with error handling
- [`safe_rmdir()`](lib/common.sh:146) - Remove directories safely with error handling
- [`safe_copy()`](lib/common.sh:155) - Copy files/directories safely with error handling
- [`resolve_path()`](lib/common.sh:168) - Convert relative to absolute paths
- [`get_file_size()`](lib/common.sh:178) - Get human-readable file size
- [`wait_for_file()`](lib/common.sh:188) - Wait for file to exist with timeout
- [`cleanup_on_exit()`](lib/common.sh:202) - Cleanup function for exit traps
## kernel.sh - Kernel building
File: [scripts/lib/kernel.sh](scripts/lib/kernel.sh)
- [kernel_get_full_version()](scripts/lib/kernel.sh:14)
- [kernel_download_source()](scripts/lib/kernel.sh:28)
- [kernel_apply_config()](scripts/lib/kernel.sh:82)
- [kernel_modify_config_for_initramfs()](scripts/lib/kernel.sh:129)
- [kernel_build_with_initramfs()](scripts/lib/kernel.sh:174)
- [kernel_build_modules()](scripts/lib/kernel.sh:228)
- [kernel_cleanup()](scripts/lib/kernel.sh:284)
## **components.sh** - ThreeFold Component Management
## stages.sh - Build stage tracking
File: [scripts/lib/stages.sh](scripts/lib/stages.sh)
- [stages_init()](scripts/lib/stages.sh:12)
- [stage_is_completed()](scripts/lib/stages.sh:33)
- [stage_mark_completed()](scripts/lib/stages.sh:48)
- [stage_force_rebuild()](scripts/lib/stages.sh:69)
- [stages_clear_all()](scripts/lib/stages.sh:82)
- [stage_run()](scripts/lib/stages.sh:99)
- [stages_status()](scripts/lib/stages.sh:134)
### Component Processing
- [`components_parse_sources_conf()`](lib/components.sh:13) - Parse and build all components from sources.conf
- [`components_download_git()`](lib/components.sh:72) - Clone Git repositories with specific versions
- [`components_download_release()`](lib/components.sh:104) - Download pre-built release binaries
- [`components_process_extra_options()`](lib/components.sh:144) - Handle rename/extract options for components
- [`components_build_component()`](lib/components.sh:183) - Build component using specified build function
## testing.sh - Boot testing
File: [scripts/lib/testing.sh](scripts/lib/testing.sh)
- [testing_qemu_boot()](scripts/lib/testing.sh:14)
- [testing_qemu_basic_boot()](scripts/lib/testing.sh:55)
- [testing_qemu_serial_boot()](scripts/lib/testing.sh:90)
- [testing_qemu_interactive_boot()](scripts/lib/testing.sh:113)
- [testing_cloud_hypervisor_boot()](scripts/lib/testing.sh:135)
- [testing_cloud_hypervisor_basic()](scripts/lib/testing.sh:171)
- [testing_cloud_hypervisor_serial()](scripts/lib/testing.sh:206)
- [testing_analyze_boot_log()](scripts/lib/testing.sh:227)
- [testing_run_all()](scripts/lib/testing.sh:299)
### Build Environment
- [`components_setup_rust_env()`](lib/components.sh:217) - Configure Rust environment for musl builds
### Component-Specific Build Functions
- [`build_zinit()`](lib/components.sh:252) - Build zinit init system from source (Rust)
- [`build_rfs()`](lib/components.sh:304) - Build rfs (rootfs) from source (Rust)
- [`build_mycelium()`](lib/components.sh:356) - Build mycelium networking from source (Rust, subdirectory)
- [`install_rfs()`](lib/components.sh:401) - Install pre-built rfs binary
- [`install_corex()`](lib/components.sh:427) - Install pre-built corex binary
### Verification and Cleanup
- [`components_verify_installation()`](lib/components.sh:457) - Verify all components were installed correctly
- [`components_cleanup()`](lib/components.sh:493) - Clean build artifacts
## **docker.sh** - Container Runtime Management
### Runtime Detection and Setup
- [`docker_detect_runtime()`](lib/docker.sh:14) - Detect available container runtime (Docker/Podman)
- [`docker_verify_rootless()`](lib/docker.sh:33) - Verify rootless container setup works
- [`docker_check_capabilities()`](lib/docker.sh:209) - Check container runtime capabilities
- [`docker_setup_rootless()`](lib/docker.sh:240) - Setup rootless environment (subuid/subgid)
### Container Image Management
- [`docker_build_container()`](lib/docker.sh:47) - Build container image with build tools
- [`docker_create_dockerfile()`](lib/docker.sh:65) - Create optimized Dockerfile for build environment
- [`docker_commit_builder()`](lib/docker.sh:178) - Commit container state for reuse
- [`docker_cleanup()`](lib/docker.sh:191) - Clean up container images
### Container Execution
- [`docker_start_rootless()`](lib/docker.sh:116) - Start rootless container for building
- [`docker_run_build()`](lib/docker.sh:154) - Run build command in container with proper mounts
## **initramfs.sh** - Initramfs Assembly
### Core Assembly Functions
- [`initramfs_setup_zinit()`](lib/initramfs.sh:13) - Setup zinit as init system (replaces OpenRC completely)
- [`initramfs_install_init_script()`](lib/initramfs.sh:71) - Install critical /init script for initramfs boot
- [`initramfs_setup_modules()`](lib/initramfs.sh:98) - Setup 2-stage module loading with dependencies
### Module Management
- [`initramfs_resolve_module_dependencies()`](lib/initramfs.sh:166) - Recursively resolve module dependencies using modinfo
- [`initramfs_create_module_scripts()`](lib/initramfs.sh:236) - Create stage1/stage2 module loading scripts for zinit
### Optimization and Packaging
- [`initramfs_strip_and_upx()`](lib/initramfs.sh:300) - Strip debug symbols and UPX compress binaries for size optimization
- [`initramfs_create_cpio()`](lib/initramfs.sh:383) - Create final compressed initramfs archive (xz/gzip/zstd/uncompressed)
### Validation and Testing
- [`initramfs_validate()`](lib/initramfs.sh:449) - Validate initramfs contents and structure
- [`initramfs_test_archive()`](lib/initramfs.sh:549) - Test initramfs archive integrity
## **kernel.sh** - Kernel Building
### Source Management
- [`kernel_download_source()`](lib/kernel.sh:14) - Download Linux kernel source code from kernel.org
- [`kernel_apply_config()`](lib/kernel.sh:68) - Apply kernel configuration with embedded initramfs path
- [`kernel_modify_config_for_initramfs()`](lib/kernel.sh:116) - Modify kernel config for embedded initramfs support
### Build Functions
- [`kernel_build_with_initramfs()`](lib/kernel.sh:144) - Build kernel with embedded initramfs (complete process)
- [`kernel_build_modules()`](lib/kernel.sh:203) - Build kernel modules for initramfs inclusion
### Cleanup
- [`kernel_cleanup()`](lib/kernel.sh:242) - Clean kernel build artifacts (with option to keep source)
## **testing.sh** - Virtualization Testing
### QEMU Testing
- [`testing_qemu_boot()`](lib/testing.sh:14) - Test kernel boot with QEMU (multiple modes: basic/serial/interactive)
- [`testing_qemu_basic_boot()`](lib/testing.sh:55) - Basic automated QEMU boot test with timeout
- [`testing_qemu_serial_boot()`](lib/testing.sh:90) - QEMU serial console test for debugging
- [`testing_qemu_interactive_boot()`](lib/testing.sh:114) - Interactive QEMU session (no timeout)
### Cloud Hypervisor Testing
- [`testing_cloud_hypervisor_boot()`](lib/testing.sh:135) - Test with cloud-hypervisor VMM
- [`testing_cloud_hypervisor_basic()`](lib/testing.sh:172) - Basic cloud-hypervisor test with timeout
- [`testing_cloud_hypervisor_serial()`](lib/testing.sh:206) - cloud-hypervisor serial console test
### Analysis and Orchestration
- [`testing_analyze_boot_log()`](lib/testing.sh:228) - Analyze boot logs for success/failure indicators
- [`testing_run_all()`](lib/testing.sh:299) - Run comprehensive test suite (QEMU + cloud-hypervisor)
## Usage Notes
### Function Availability
All functions are exported for sourcing and can be called from any script that sources the respective library file. The common pattern is:
```bash
# Source the library
source "${SCRIPT_DIR}/lib/common.sh"
source "${SCRIPT_DIR}/lib/alpine.sh"
# ... other libraries as needed
# Use the functions
alpine_extract_miniroot "/path/to/target"
components_parse_sources_conf "/path/to/sources.conf" "/path/to/components"
```
### Error Handling
All functions follow consistent error handling patterns:
- Return non-zero exit codes on failure
- Use [`safe_execute()`](lib/common.sh:54) for command execution
- Provide detailed logging via [`log_*()`](lib/common.sh:31) functions
- Clean up resources on failure
### Dependencies
Functions have dependencies on:
- External tools (checked via [`check_dependencies()`](lib/common.sh:96))
- Other library functions (noted in function descriptions)
- Configuration files and environment variables
- Proper directory structures
### Configuration
Most functions respect environment variables for configuration:
- `DEBUG=1` enables debug logging
- `ALPINE_VERSION`, `KERNEL_VERSION` set versions
- `RUST_TARGET` configures Rust builds
- Various `*_DIR` variables set paths

474
scripts/rfs/common.sh Executable file
View File

@@ -0,0 +1,474 @@
#!/bin/bash
# Common helpers for RFS flist creation and manifest patching
# - No changes to existing build pipeline; this library is used by standalone scripts under scripts/rfs
# - Computes FULL_KERNEL_VERSION from configs (never uses uname -r)
# - Loads S3 (garage) config and builds rfs S3 store URI
# - Locates rfs binary and source trees for modules/firmware
# - Provides helper to patch .fl (sqlite) stores table to use HTTPS web endpoint
set -euo pipefail
# Resolve project root from this file location
rfs_common_project_root() {
local here
here="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# scripts/rfs -> project root is two levels up
dirname "$(dirname "$here")"
}
PROJECT_ROOT="${PROJECT_ROOT:-$(rfs_common_project_root)}"
SCRIPT_DIR="${PROJECT_ROOT}/scripts"
LIB_DIR="${SCRIPT_DIR}/lib"
# Bring in logging and helpers if available
if [[ -f "${LIB_DIR}/common.sh" ]]; then
# shellcheck source=/dev/null
source "${LIB_DIR}/common.sh"
else
# Minimal logging fallbacks
log_info() { echo "[INFO] $*"; }
log_warn() { echo "[WARN] $*" >&2; }
log_error() { echo "[ERROR] $*" >&2; }
log_debug() { if [[ "${DEBUG:-0}" == "1" ]]; then echo "[DEBUG] $*"; fi }
safe_execute() { echo "[EXEC] $*"; "$@"; }
fi
# -----------------------------------------------------------------------------
# Config loaders
# -----------------------------------------------------------------------------
# Load build.conf (KERNEL_VERSION, etc.) and compute FULL_KERNEL_VERSION
# FULL_KERNEL_VERSION = KERNEL_VERSION + CONFIG_LOCALVERSION from config/kernel.config
rfs_common_load_build_kernel_version() {
local build_conf="${PROJECT_ROOT}/config/build.conf"
local kcfg="${PROJECT_ROOT}/config/kernel.config"
if [[ -f "$build_conf" ]]; then
# shellcheck source=/dev/null
source "$build_conf"
else
log_error "Missing build config: ${build_conf}"
return 1
fi
local base_ver="${KERNEL_VERSION:-}"
if [[ -z "$base_ver" ]]; then
log_error "KERNEL_VERSION not set in ${build_conf}"
return 1
fi
if [[ ! -f "$kcfg" ]]; then
log_error "Missing kernel config: ${kcfg}"
return 1
fi
# Extract CONFIG_LOCALVERSION="..."; may include leading '-' in value
local localver
localver="$(grep -E '^CONFIG_LOCALVERSION=' "$kcfg" | cut -d'"' -f2 || true)"
local full_ver="${base_ver}${localver}"
if [[ -z "$full_ver" ]]; then
log_error "Failed to compute FULL_KERNEL_VERSION from configs"
return 1
fi
export FULL_KERNEL_VERSION="$full_ver"
log_info "Computed FULL_KERNEL_VERSION: ${FULL_KERNEL_VERSION}"
}
# Load RFS S3 configuration from config/rfs.conf or config/rfs.conf.example
# Required:
# S3_ENDPOINT, S3_REGION, S3_BUCKET, S3_PREFIX, S3_ACCESS_KEY, S3_SECRET_KEY
rfs_common_load_rfs_s3_config() {
local conf_real="${PROJECT_ROOT}/config/rfs.conf"
local conf_example="${PROJECT_ROOT}/config/rfs.conf.example"
if [[ -f "$conf_real" ]]; then
# shellcheck source=/dev/null
source "$conf_real"
log_info "Loaded RFS S3 config: ${conf_real}"
elif [[ -f "$conf_example" ]]; then
# shellcheck source=/dev/null
source "$conf_example"
log_warn "Using example RFS config: ${conf_example} (override with config/rfs.conf)"
else
log_error "No RFS config found. Create config/rfs.conf or config/rfs.conf.example"
return 1
fi
# Allow environment to override sourced values
S3_ENDPOINT="${S3_ENDPOINT:-}"
S3_REGION="${S3_REGION:-}"
S3_BUCKET="${S3_BUCKET:-}"
S3_PREFIX="${S3_PREFIX:-}"
S3_ACCESS_KEY="${S3_ACCESS_KEY:-}"
S3_SECRET_KEY="${S3_SECRET_KEY:-}"
local missing=0
for v in S3_ENDPOINT S3_REGION S3_BUCKET S3_PREFIX S3_ACCESS_KEY S3_SECRET_KEY; do
if [[ -z "${!v}" ]]; then
log_error "Missing required S3 config variable: ${v}"
missing=1
fi
done
if [[ $missing -ne 0 ]]; then
log_error "Incomplete RFS S3 configuration"
return 1
fi
export S3_ENDPOINT S3_REGION S3_BUCKET S3_PREFIX S3_ACCESS_KEY S3_SECRET_KEY
# Validate placeholders are not left as defaults
if [[ "${S3_ACCESS_KEY}" == "REPLACE_ME" || "${S3_SECRET_KEY}" == "REPLACE_ME" ]]; then
log_error "S3_ACCESS_KEY / S3_SECRET_KEY in config/rfs.conf are placeholders. Please set real credentials."
return 1
fi
# Optional read-only credentials for route URL; default to write keys if not provided
READ_ACCESS_KEY="${READ_ACCESS_KEY:-$S3_ACCESS_KEY}"
READ_SECRET_KEY="${READ_SECRET_KEY:-$S3_SECRET_KEY}"
# Garage blob route path (default /blobs)
ROUTE_PATH="${ROUTE_PATH:-/blobs}"
export READ_ACCESS_KEY READ_SECRET_KEY ROUTE_PATH
}
# Build rfs S3 store URI from loaded S3 config
# Format: s3://ACCESS:SECRET@HOST:PORT/BUCKET/PREFIX?region=REGION
rfs_common_build_s3_store_uri() {
if [[ -z "${S3_ENDPOINT:-}" ]]; then
log_error "S3_ENDPOINT not set; call rfs_common_load_rfs_s3_config first"
return 1
fi
# Strip scheme from endpoint
local hostport="${S3_ENDPOINT#http://}"
hostport="${hostport#https://}"
hostport="${hostport%/}"
# Ensure explicit port; default to Garage S3 port 3900 when missing
if [[ "$hostport" != *:* ]]; then
hostport="${hostport}:3900"
fi
# Minimal percent-encoding for ':' and '@' in credentials
local ak="${S3_ACCESS_KEY//:/%3A}"
ak="${ak//@/%40}"
local sk="${S3_SECRET_KEY//:/%3A}"
sk="${sk//@/%40}"
local path="${S3_BUCKET}/${S3_PREFIX}"
path="${path#/}" # ensure no leading slash duplication
local uri="s3://${ak}:${sk}@${hostport}/${path}?region=${S3_REGION}"
export RFS_S3_STORE_URI="$uri"
log_info "Constructed RFS S3 store URI: ${RFS_S3_STORE_URI}"
}
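# Example of a constructed store URI (hypothetical endpoint, bucket and keys, shown for
# illustration only; real values come from config/rfs.conf):
#   S3_ENDPOINT=http://s3.example.com:3900 S3_BUCKET=zos S3_PREFIX=store S3_REGION=garage
#   => s3://ACCESS:SECRET@s3.example.com:3900/zos/store?region=garage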
# -----------------------------------------------------------------------------
# Tool discovery
# -----------------------------------------------------------------------------
# Locate rfs binary: prefer PATH, fallback to components build
rfs_common_locate_rfs() {
if command -v rfs >/dev/null 2>&1; then
export RFS_BIN="$(command -v rfs)"
log_info "Using rfs from PATH: ${RFS_BIN}"
return 0
fi
# Fallback to components
local rtarget
if [[ -f "${PROJECT_ROOT}/config/build.conf" ]]; then
# shellcheck source=/dev/null
source "${PROJECT_ROOT}/config/build.conf"
fi
rtarget="${RUST_TARGET:-x86_64-unknown-linux-musl}"
local candidate="${PROJECT_ROOT}/components/rfs/target/${rtarget}/release/rfs"
if [[ -x "$candidate" ]]; then
export RFS_BIN="$candidate"
log_info "Using rfs from components: ${RFS_BIN}"
return 0
fi
log_error "rfs binary not found. Build it via components stage or install it in PATH."
return 1
}
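# If neither location has rfs, it can typically be built from the vendored component
# (sketch; assumes the dev container's rustup + musl toolchain):
#   cargo build --release --target x86_64-unknown-linux-musl --manifest-path "${PROJECT_ROOT}/components/rfs/Cargo.toml"
# which produces the components candidate path checked above.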
# Ensure sqlite3 is available (for manifest patch)
rfs_common_require_sqlite3() {
if ! command -v sqlite3 >/dev/null 2>&1; then
log_error "sqlite3 not found. Install sqlite3 to patch .fl manifest stores."
return 1
fi
}
# -----------------------------------------------------------------------------
# Source tree discovery
# -----------------------------------------------------------------------------
# Locate modules directory for FULL_KERNEL_VERSION
# Priority:
# 1) /lib/modules/<FULL_KERNEL_VERSION>
# 2) ${PROJECT_ROOT}/kernel/lib/modules/<FULL_KERNEL_VERSION>
# 3) ${PROJECT_ROOT}/initramfs/lib/modules/<FULL_KERNEL_VERSION>
rfs_common_locate_modules_dir() {
local kver="${1:-${FULL_KERNEL_VERSION:-}}"
if [[ -z "$kver" ]]; then
log_error "rfs_common_locate_modules_dir: FULL_KERNEL_VERSION is empty"
return 1
fi
local candidates=(
"/lib/modules/${kver}"
"${PROJECT_ROOT}/kernel/lib/modules/${kver}"
"${PROJECT_ROOT}/initramfs/lib/modules/${kver}"
)
local d
for d in "${candidates[@]}"; do
if [[ -d "$d" ]]; then
export MODULES_DIR="$d"
log_info "Found modules dir: ${MODULES_DIR}"
return 0
fi
done
log_error "No modules directory found for ${kver}. Checked: ${candidates[*]}"
return 1
}
# Locate firmware directory
# Priority:
# 1) ${PROJECT_ROOT}/firmware
# 2) ${PROJECT_ROOT}/initramfs/lib/firmware
# 3) /lib/firmware
rfs_common_locate_firmware_dir() {
local candidates=(
"${PROJECT_ROOT}/firmware"
"${PROJECT_ROOT}/initramfs/lib/firmware"
"/lib/firmware"
)
local d
for d in "${candidates[@]}"; do
if [[ -d "$d" ]]; then
export FIRMWARE_DIR="$d"
log_info "Found firmware dir: ${FIRMWARE_DIR}"
return 0
fi
done
log_error "No firmware directory found. Checked: ${candidates[*]}"
return 1
}
# Ensure precomputed modules metadata are present (to avoid depmod at boot)
rfs_common_validate_modules_metadata() {
local md="${MODULES_DIR:-}"
if [[ -z "$md" || ! -d "$md" ]]; then
log_error "MODULES_DIR not set or invalid"
return 1
fi
local ok=1
local files=(modules.dep modules.dep.bin modules.alias modules.alias.bin modules.symbols.bin modules.order modules.builtin modules.builtin.modinfo)
local missing=()
for f in "${files[@]}"; do
if [[ ! -f "${md}/${f}" ]]; then
missing+=("$f")
ok=0
fi
done
if [[ $ok -eq 1 ]]; then
log_info "Modules metadata present in ${md}"
return 0
else
log_warn "Missing some modules metadata in ${md}: ${missing[*]}"
# Not fatal; rfs pack can proceed, but boot may require depmod -A or full scan
return 0
fi
}
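# If metadata is missing it can be regenerated offline before packing (sketch; assumes
# the modules live under an initramfs staging tree rather than the host /lib/modules):
#   depmod -b "${PROJECT_ROOT}/initramfs" "${FULL_KERNEL_VERSION}"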
# -----------------------------------------------------------------------------
# Manifest patching (sqlite .fl)
# -----------------------------------------------------------------------------
# Patch the .fl manifest's stores table to use an HTTPS web endpoint
# Args:
# $1 = path to .fl file
# $2 = HTTPS base (e.g., https://hub.grid.tf/zos/zosbuilder) - no trailing slash
# $3 = keep_s3_fallback ("true"/"false") - if true, retain existing s3:// row(s)
rfs_common_patch_flist_stores() {
local fl="$1"
local web_base="$2"
local keep_s3="${3:-false}"
if [[ ! -f "$fl" ]]; then
log_error "Manifest file not found: ${fl}"
return 1
fi
if [[ -z "$web_base" ]]; then
log_error "Web endpoint base is empty"
return 1
fi
rfs_common_require_sqlite3
# Ensure no trailing slash
web_base="${web_base%/}"
# Require the stores table, then either replace existing s3:// URIs with web_base or, when keeping s3, ensure a web_base row exists.
local has_table
has_table="$(sqlite3 "$fl" "SELECT name FROM sqlite_master WHERE type='table' AND name='stores';" || true)"
if [[ -z "$has_table" ]]; then
log_error "stores table not found in manifest (unexpected schema): ${fl}"
return 1
fi
# Does any s3 store exist?
local s3_count
s3_count="$(sqlite3 "$fl" "SELECT COUNT(*) FROM stores WHERE uri LIKE 's3://%';" || echo 0)"
if [[ "${keep_s3}" != "true" ]]; then
# Replace all s3://... URIs with the HTTPS web base
log_info "Replacing s3 stores with HTTPS: ${web_base}"
sqlite3 "$fl" "UPDATE stores SET uri='${web_base}' WHERE uri LIKE 's3://%';"
else
# Keep s3, but ensure https row exists and is ordered first if applicable
local https_count
https_count="$(sqlite3 "$fl" "SELECT COUNT(*) FROM stores WHERE uri='${web_base}';" || echo 0)"
if [[ "$https_count" -eq 0 ]]; then
log_info "Adding HTTPS store ${web_base} alongside existing s3 store(s)"
# Attempt simple insert; table schema may include more columns, so try a best-effort approach:
# Assume minimal schema: (id INTEGER PRIMARY KEY, uri TEXT UNIQUE)
# If fails, user can adjust with rfs CLI.
set +e
sqlite3 "$fl" "INSERT OR IGNORE INTO stores(uri) VALUES('${web_base}');"
local rc=$?
set -e
if [[ $rc -ne 0 ]]; then
log_warn "Could not INSERT into stores; schema may be different. Consider using rfs CLI to add store."
fi
else
log_info "HTTPS store already present in manifest"
fi
fi
log_info "Patched stores in manifest: ${fl}"
return 0
}
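# Quick verification sketch for a patched manifest (assumes the minimal stores schema
# described above; the manifest path is an example):
#   sqlite3 dist/flists/modules-6.12.44-Zero-OS.fl "SELECT uri FROM stores;"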
# -----------------------------------------------------------------------------
# Manifest route URL patching (sqlite .fl) - use read-only credentials
# -----------------------------------------------------------------------------
# Build route URL for the flist 'route' table using read-only keys
# Result example:
# s3://READ_KEY:READ_SECRET@host:3900/blobs?region=garage
rfs_common_build_route_url() {
# Ensure sqlite available for later patch step
rfs_common_require_sqlite3
# Defaults applicable to Garage
local route_region="${ROUTE_REGION:-garage}"
local route_path="${ROUTE_PATH:-/blobs}"
# Derive host:port from ROUTE_ENDPOINT or S3_ENDPOINT
local endpoint="${ROUTE_ENDPOINT:-${S3_ENDPOINT:-}}"
if [[ -z "$endpoint" ]]; then
log_error "No ROUTE_ENDPOINT or S3_ENDPOINT set; cannot build route URL"
return 1
fi
local hostport="${endpoint#http://}"
hostport="${hostport#https://}"
hostport="${hostport%/}"
# Ensure explicit port; default to Garage S3 port 3900 when missing
if [[ "$hostport" != *:* ]]; then
hostport="${hostport}:3900"
fi
# Percent-encode credentials minimally for ':' and '@'
local rak="${READ_ACCESS_KEY//:/%3A}"
rak="${rak//@/%40}"
local rsk="${READ_SECRET_KEY//:/%3A}"
rsk="${rsk//@/%40}"
# Normalize route path (ensure leading slash)
if [[ "$route_path" != /* ]]; then
route_path="/${route_path}"
fi
local url="s3://${rak}:${rsk}@${hostport}${route_path}?region=${route_region}"
export RFS_ROUTE_URL="$url"
log_info "Constructed route URL for flist: ${RFS_ROUTE_URL}"
}
# Patch the 'route' table URL inside the .fl manifest to use read-only key URL
# Args:
# $1 = path to .fl file
rfs_common_patch_flist_route_url() {
local fl="$1"
if [[ -z "${RFS_ROUTE_URL:-}" ]]; then
log_error "RFS_ROUTE_URL is empty; call rfs_common_build_route_url first"
return 1
fi
if [[ ! -f "$fl" ]]; then
log_error "Manifest file not found: ${fl}"
return 1
fi
rfs_common_require_sqlite3
# Ensure 'route' table exists
local has_route
has_route="$(sqlite3 "$fl" "SELECT name FROM sqlite_master WHERE type='table' AND name='route';" || true)"
if [[ -z "$has_route" ]]; then
log_error "route table not found in manifest (unexpected schema): ${fl}"
return 1
fi
log_info "Updating route.url to: ${RFS_ROUTE_URL}"
sqlite3 "$fl" "UPDATE route SET url='${RFS_ROUTE_URL}';"
log_info "Patched route URL in manifest: ${fl}"
}
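# Verification sketch (reads back the route row updated above; the manifest path is an example):
#   sqlite3 dist/flists/modules-6.12.44-Zero-OS.fl "SELECT url FROM route;"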
# -----------------------------------------------------------------------------
# Packaging helpers
# -----------------------------------------------------------------------------
# Ensure output directory exists and echo final manifest path
# Args:
# $1 = basename for manifest (e.g., modules-6.12.44-Zero-OS.fl)
rfs_common_prepare_output() {
local base="$1"
local outdir="${PROJECT_ROOT}/dist/flists"
mkdir -p "$outdir"
echo "${outdir}/${base}"
}
# Sanitize firmware tag or generate date-based tag (YYYYMMDD)
rfs_common_firmware_tag() {
local tag="${FIRMWARE_TAG:-}"
if [[ -n "$tag" ]]; then
# Replace path-unfriendly chars
tag="${tag//[^A-Za-z0-9._-]/_}"
echo "$tag"
else
date -u +%Y%m%d
fi
}
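# Illustrative behaviour (assumed inputs):
#   FIRMWARE_TAG="fw 2025/09" -> "fw_2025_09"
#   FIRMWARE_TAG unset        -> UTC date, e.g. "20250908"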
# If executed directly, show a quick status summary
if [[ "${BASH_SOURCE[0]}" == "$0" ]]; then
log_info "rfs-common self-check..."
rfs_common_load_build_kernel_version
rfs_common_load_rfs_s3_config
rfs_common_build_s3_store_uri
rfs_common_locate_rfs
rfs_common_locate_modules_dir "${FULL_KERNEL_VERSION}"
rfs_common_validate_modules_metadata
rfs_common_locate_firmware_dir
log_info "All checks passed."
log_info "FULL_KERNEL_VERSION=${FULL_KERNEL_VERSION}"
log_info "RFS_S3_STORE_URI=${RFS_S3_STORE_URI}"
log_info "MODULES_DIR=${MODULES_DIR}"
log_info "FIRMWARE_DIR=${FIRMWARE_DIR}"
log_info "RFS_BIN=${RFS_BIN}"
fi
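# Self-check usage (sketch): run from the repository root, inside the dev container,
# with config/rfs.conf populated:
#   bash scripts/rfs/common.sh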

scripts/rfs/pack-firmware.sh Executable file

@@ -0,0 +1,79 @@
#!/bin/bash
# Pack the firmware tree into an RFS flist and patch the manifest for read-only access from Garage S3.
# - Computes FULL_KERNEL_VERSION from configs (not strictly needed for firmware, but kept uniform)
# - Selects firmware directory with priority:
# 1) $PROJECT_ROOT/firmware
# 2) $PROJECT_ROOT/initramfs/lib/firmware
# 3) /lib/firmware
# - Manifest name: firmware-<FIRMWARE_TAG or YYYYMMDD>.fl
# - Uploads blobs to S3 (Garage) via rfs store URI
# - Patches .fl route.url with read-only S3 credentials; optionally patches the sqlite stores table to use WEB_ENDPOINT for read-only fetches
# - Optionally uploads the .fl manifest to S3 manifests/ using the MinIO Client (mcli/mc)
set -euo pipefail
HERE="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=/dev/null
source "${HERE}/common.sh"
section() { echo -e "\n==== $* ====\n"; }
section "Loading configuration (kernel + RFS S3) and locating rfs"
# Kernel version is computed for consistency/logging (not required to pack firmware)
rfs_common_load_build_kernel_version
rfs_common_load_rfs_s3_config
rfs_common_build_s3_store_uri
rfs_common_locate_rfs
section "Locating firmware directory"
rfs_common_locate_firmware_dir
TAG="$(rfs_common_firmware_tag)"
MANIFEST_NAME="firmware-${TAG}.fl"
MANIFEST_PATH="$(rfs_common_prepare_output "${MANIFEST_NAME}")"
section "Packing firmware to flist"
log_info "Firmware dir: ${FIRMWARE_DIR}"
log_info "rfs pack -m ${MANIFEST_PATH} -s ${RFS_S3_STORE_URI} ${FIRMWARE_DIR}"
safe_execute "${RFS_BIN}" pack -m "${MANIFEST_PATH}" -s "${RFS_S3_STORE_URI}" "${FIRMWARE_DIR}"
# Patch manifest route URL to include read-only S3 credentials (Garage)
section "Updating route URL in manifest to include read-only S3 credentials"
rfs_common_build_route_url
rfs_common_patch_flist_route_url "${MANIFEST_PATH}"
# Patch manifest stores to HTTPS web endpoint if provided
if [[ -n "${WEB_ENDPOINT:-}" ]]; then
section "Patching manifest stores to HTTPS web endpoint"
log_info "Patching ${MANIFEST_PATH} stores to: ${WEB_ENDPOINT} (keep_s3_fallback=${KEEP_S3_FALLBACK:-false})"
rfs_common_patch_flist_stores "${MANIFEST_PATH}" "${WEB_ENDPOINT}" "${KEEP_S3_FALLBACK:-false}"
else
log_warn "WEB_ENDPOINT not set in config; manifest will reference only s3:// store"
fi
# Optional: upload .fl manifest to Garage via MinIO Client (separate from blobs)
if [[ "${UPLOAD_MANIFESTS:-false}" == "true" ]]; then
section "Uploading manifest .fl via MinIO Client to S3 manifests/"
# Support both mcli (new) and mc (legacy) binaries
if command -v mcli >/dev/null 2>&1; then
MCLI_BIN="mcli"
elif command -v mc >/dev/null 2>&1; then
MCLI_BIN="mc"
else
log_warn "MinIO Client not found (expected mcli or mc); skipping manifest upload"
MCLI_BIN=""
fi
if [[ -n "${MCLI_BIN}" ]]; then
local_subpath="${MANIFESTS_SUBPATH:-manifests}"
# Configure alias and upload using MinIO client
safe_execute "${MCLI_BIN}" alias set rfs "${S3_ENDPOINT}" "${S3_ACCESS_KEY}" "${S3_SECRET_KEY}"
mcli_dst="rfs/${S3_BUCKET}/${S3_PREFIX%/}/${local_subpath%/}/${MANIFEST_NAME}"
log_info "${MCLI_BIN} cp ${MANIFEST_PATH} ${mcli_dst}"
safe_execute "${MCLI_BIN}" cp "${MANIFEST_PATH}" "${mcli_dst}"
fi
else
log_info "UPLOAD_MANIFESTS=false; skipping manifest upload"
fi
section "Done"
log_info "Manifest: ${MANIFEST_PATH}"

scripts/rfs/pack-modules.sh Executable file

@@ -0,0 +1,73 @@
#!/bin/bash
# Pack kernel modules into an RFS flist and patch the manifest for read-only access from Garage S3.
# - Computes FULL_KERNEL_VERSION from configs (never uses uname -r)
# - Packs /lib/modules/<FULL_KERNEL_VERSION> (or fallback paths) to dist/flists/modules-<FULL_KERNEL_VERSION>.fl
# - Uploads blobs to S3 (Garage) via rfs store URI
# - Patches .fl route.url with read-only S3 credentials; optionally patches the sqlite stores table to use WEB_ENDPOINT for read-only fetches
# - Optionally uploads the .fl manifest to S3 manifests/ using the MinIO Client (mcli/mc)
set -euo pipefail
HERE="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=/dev/null
source "${HERE}/common.sh"
section() { echo -e "\n==== $* ====\n"; }
section "Loading configuration and computing kernel version"
rfs_common_load_build_kernel_version
rfs_common_load_rfs_s3_config
rfs_common_build_s3_store_uri
rfs_common_locate_rfs
section "Locating modules directory for ${FULL_KERNEL_VERSION}"
rfs_common_locate_modules_dir "${FULL_KERNEL_VERSION}"
rfs_common_validate_modules_metadata
MANIFEST_NAME="modules-${FULL_KERNEL_VERSION}.fl"
MANIFEST_PATH="$(rfs_common_prepare_output "${MANIFEST_NAME}")"
section "Packing modules to flist"
log_info "rfs pack -m ${MANIFEST_PATH} -s ${RFS_S3_STORE_URI} ${MODULES_DIR}"
safe_execute "${RFS_BIN}" pack -m "${MANIFEST_PATH}" -s "${RFS_S3_STORE_URI}" "${MODULES_DIR}"
# Patch manifest route URL to include read-only S3 credentials (Garage)
section "Updating route URL in manifest to include read-only S3 credentials"
rfs_common_build_route_url
rfs_common_patch_flist_route_url "${MANIFEST_PATH}"
# Patch manifest stores to HTTPS web endpoint if provided
if [[ -n "${WEB_ENDPOINT:-}" ]]; then
section "Patching manifest stores to HTTPS web endpoint"
log_info "Patching ${MANIFEST_PATH} stores to: ${WEB_ENDPOINT} (keep_s3_fallback=${KEEP_S3_FALLBACK:-false})"
rfs_common_patch_flist_stores "${MANIFEST_PATH}" "${WEB_ENDPOINT}" "${KEEP_S3_FALLBACK:-false}"
else
log_warn "WEB_ENDPOINT not set in config; manifest will reference only s3:// store"
fi
# Optional: upload .fl manifest to Garage via MinIO Client (separate from blobs)
if [[ "${UPLOAD_MANIFESTS:-false}" == "true" ]]; then
section "Uploading manifest .fl via MinIO Client to S3 manifests/"
# Support both mcli (new) and mc (legacy) binaries
if command -v mcli >/dev/null 2>&1; then
MCLI_BIN="mcli"
elif command -v mc >/dev/null 2>&1; then
MCLI_BIN="mc"
else
log_warn "MinIO Client not found (expected mcli or mc); skipping manifest upload"
MCLI_BIN=""
fi
if [[ -n "${MCLI_BIN}" ]]; then
local_subpath="${MANIFESTS_SUBPATH:-manifests}"
# Configure alias and upload using MinIO client
safe_execute "${MCLI_BIN}" alias set rfs "${S3_ENDPOINT}" "${S3_ACCESS_KEY}" "${S3_SECRET_KEY}"
mcli_dst="rfs/${S3_BUCKET}/${S3_PREFIX%/}/${local_subpath%/}/${MANIFEST_NAME}"
log_info "${MCLI_BIN} cp ${MANIFEST_PATH} ${mcli_dst}"
safe_execute "${MCLI_BIN}" cp "${MANIFEST_PATH}" "${mcli_dst}"
fi
else
log_info "UPLOAD_MANIFESTS=false; skipping manifest upload"
fi
section "Done"
log_info "Manifest: ${MANIFEST_PATH}"

scripts/rfs/patch-stores.sh Executable file

@@ -0,0 +1,24 @@
#!/bin/bash
# Wrapper to patch an .fl manifest's stores to use an HTTPS web endpoint.
# Usage:
# ./scripts/rfs/patch-stores.sh dist/flists/modules-6.12.44-Zero-OS.fl https://hub.grid.tf/zos/zosbuilder/store [keep_s3_fallback]
#
# keep_s3_fallback: "true" to keep existing s3:// store rows as fallback; default "false"
set -euo pipefail
HERE="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=/dev/null
source "${HERE}/common.sh"
if [[ $# -lt 2 ]]; then
echo "Usage: $0 /path/to/file.fl https://web.endpoint/base [keep_s3_fallback]" >&2
exit 1
fi
FL="$1"
WEB="$2"
KEEP="${3:-false}"
rfs_common_patch_flist_stores "${FL}" "${WEB}" "${KEEP}"
echo "[INFO] Patched stores in: ${FL}"

scripts/rfs/verify-flist.sh Executable file

@@ -0,0 +1,110 @@
#!/bin/bash
# Verify an RFS flist manifest: inspect, tree, and optional mount test (best-effort)
# This script is safe to run on developer machines; the mount test may require root and a suitable FUSE policy.
set -euo pipefail
HERE="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=/dev/null
source "${HERE}/common.sh"
usage() {
cat <<'EOF'
Usage: scripts/rfs/verify-flist.sh -m path/to/manifest.fl [--tree] [--mount] [--mountpoint DIR]
Actions:
- Inspect manifest metadata (always)
- Tree view of entries (--tree)
- Optional mount test (--mount) to a temporary or given mountpoint
Notes:
- Mount test typically requires root and proper FUSE configuration.
- On success, a quick directory listing is shown, then the mount is unmounted.
EOF
}
section() { echo -e "\n==== $* ====\n"; }
MANIFEST=""
DO_TREE=0
DO_MOUNT=0
MOUNTPOINT=""
# Parse args
if [[ $# -eq 0 ]]; then
usage
exit 1
fi
while [[ $# -gt 0 ]]; do
case "$1" in
-m|--manifest)
MANIFEST="${2:-}"; shift 2;;
--tree)
DO_TREE=1; shift;;
--mount)
DO_MOUNT=1; shift;;
--mountpoint)
MOUNTPOINT="${2:-}"; shift 2;;
-h|--help)
usage; exit 0;;
*)
echo "Unknown argument: $1" >&2; usage; exit 1;;
esac
done
if [[ -z "${MANIFEST}" || ! -f "${MANIFEST}" ]]; then
log_error "Manifest not found: ${MANIFEST:-<empty>}"
exit 1
fi
# Ensure rfs binary is available
rfs_common_locate_rfs
section "Inspecting manifest"
safe_execute "${RFS_BIN}" flist inspect -m "${MANIFEST}" || {
log_warn "rfs flist inspect failed (old rfs?)"
log_warn "Try: ${RFS_BIN} inspect ${MANIFEST}"
}
if [[ ${DO_TREE} -eq 1 ]]; then
section "Listing manifest tree"
safe_execute "${RFS_BIN}" flist tree -m "${MANIFEST}" 2>/dev/null || {
log_warn "rfs flist tree failed; attempting fallback 'tree'"
safe_execute "${RFS_BIN}" tree -m "${MANIFEST}" || true
}
fi
if [[ ${DO_MOUNT} -eq 1 ]]; then
section "Mount test"
if [[ "$(id -u)" -ne 0 ]]; then
log_warn "Mount test skipped: requires root (uid 0)"
else
# Decide mountpoint
local_mp_created=0
if [[ -z "${MOUNTPOINT}" ]]; then
MOUNTPOINT="$(mktemp -d /tmp/rfs-mnt.XXXXXX)"
local_mp_created=1
else
mkdir -p "${MOUNTPOINT}"
fi
log_info "Mounting ${MANIFEST} -> ${MOUNTPOINT}"
set +e
# Best-effort background mount
(setsid "${RFS_BIN}" mount -m "${MANIFEST}" "${MOUNTPOINT}" >/tmp/rfs-mount.log 2>&1) &
mpid=$!
sleep 2
ls -la "${MOUNTPOINT}" | head -n 50 || true
# Try to unmount and stop background process
umount "${MOUNTPOINT}" 2>/dev/null || fusermount -u "${MOUNTPOINT}" 2>/dev/null || true
kill "${mpid}" 2>/dev/null || true
set -e
if [[ ${local_mp_created} -eq 1 ]]; then
rmdir "${MOUNTPOINT}" 2>/dev/null || true
fi
log_info "Mount test done (see /tmp/rfs-mount.log if issues)"
fi
fi
section "Verify complete"
log_info "Manifest: ${MANIFEST}"