Compare commits

...

34 Commits

Author SHA1 Message Date
947d156921 Added youki build and formatting of scripts
Some checks failed
Build Zero OS Initramfs / build (push) Has been cancelled
Build Zero OS Initramfs / test-matrix (qemu, basic) (push) Has been cancelled
Build Zero OS Initramfs / test-matrix (qemu, serial) (push) Has been cancelled
2025-11-11 20:49:36 +01:00
721e26a855 build: remove testing.sh in favor of runit.sh; add claude.md reference
Replace inline boot testing with standalone runit.sh runner for clarity:
- Remove scripts/lib/testing.sh source and boot_tests stage from build.sh
- Remove --skip-tests option from build.sh and rebuild-after-zinit.sh
- Update all docs to reference runit.sh for QEMU/cloud-hypervisor testing
- Add comprehensive claude.md as AI assistant entry point with guidelines

Testing is now fully decoupled from build pipeline; use ./runit.sh for
QEMU/cloud-hypervisor validation after builds complete.
2025-11-04 13:47:24 +01:00
334821dacf Integrate zosstorage build path and runtime orchestration
Summary:

* add openssh-client to the builder image and mount host SSH keys into the dev container when available

* switch RFS to git builds, register the zosstorage source, and document the extra Rust component

* wire zosstorage into the build: add build_zosstorage(), ship the binary in the initramfs, and extend component validation

* refresh kernel configuration to 6.12.49 while dropping Xen guest selections and enabling counted-by support

* tighten runtime configs: use cached mycelium key path, add zosstorage zinit unit, bootstrap ovsdb-server, and enable openvswitch module

* adjust the network health check ping invocation and fix the RFS pack-tree --debug flag order

* update NOTES changelog, README component list, and introduce a runit helper for qemu/cloud-hypervisor testing

* add ovsdb init script wiring under config/zinit/init and ensure zosstorage is available before mycelium
2025-10-14 17:47:13 +02:00
cf05e0ca5b rfs: add pack-tree.sh to pack arbitrary directory to flist using config/rfs.conf; enable --debug on rfs pack for verbose diagnostics
2025-10-02 17:41:16 +02:00
883ffcf734 components: read config/sources.conf to determine components, versions, and build funcs; remove hardcoded list. verification: accept rfs built or prebuilt binary paths. 2025-10-02 17:13:35 +02:00
818f5037f4 docs(TODO): use relative links from docs/ to ../scripts and ../config so links work in GitHub and VS Code 2025-10-02 11:44:41 +02:00
d5e9bf2d9a docs: add persistent TODO.md checklist with clickable references to code and configs 2025-10-01 18:09:28 +02:00
10ba31acb4 docs: regenerate scripts/functionlist.md; refresh NOTES with jump-points and roadmap; extend rfs-flists with RESP backend design. config: add RESP placeholders to rfs.conf.example. components: keep previous non-destructive git clone logic. 2025-10-01 18:06:13 +02:00
6193d241ea components: reuse existing git tree in components_download_git; config: update packages.list 2025-10-01 17:47:51 +02:00
4ca68ac0f7 Configuration changes:
- kernel config changes
- kernel version bump
- added sgdisk to initramfs packages for zosstorage to work
2025-09-30 14:41:49 +02:00
404e421411 toolfixes
2025-09-25 11:49:12 +02:00
d529d53827 Update README.md
2025-09-24 08:10:52 +00:00
2142876e3d Fixes:
- no getty on serial (console if specified)
- zinit sequence
2025-09-24 01:42:18 +02:00
c70143acb8 Some fixes
- added pciutils
- zinit sequence fixes
2025-09-24 00:37:20 +02:00
ad0a06e267 initramfs+modules: robust copy aliasing, curated stage1 + PHYs, firmware policy via firmware.conf, runtime readiness, build ID; docs sync
Summary of changes (with references):

Modules + PHY coverage
- Curated and normalized stage1 list in [config.modules.conf](config/modules.conf:1):
  - Boot-critical storage, core virtio, common NICs (Intel/Realtek/Broadcom), overlay/fuse, USB HCD/HID.
  - Added PHY drivers required by NIC MACs:
    * realtek (for r8169, etc.)
    * broadcom families: broadcom, bcm7xxx, bcm87xx, bcm_phy_lib, bcm_phy_ptp
- Robust underscore↔hyphen aliasing during copy so e.g. xhci_pci → xhci-pci.ko, hid_generic → hid-generic.ko:
  - [bash.initramfs_copy_resolved_modules()](scripts/lib/initramfs.sh:990)

Firmware policy and coverage
- Firmware selection now authoritative via [config/firmware.conf](config/firmware.conf:1); ignore modules.conf firmware hints:
  - [bash.initramfs_setup_modules()](scripts/lib/initramfs.sh:229)
  - Count from firmware.conf for reporting; remove stale required-firmware.list.
- Expanded NIC firmware set (bnx2, bnx2x, tigon, intel, realtek, rtl_nic, qlogic, e100) in [config.firmware.conf](config/firmware.conf:1).
- Installer enforces firmware.conf source-of-truth in [bash.alpine_install_firmware()](scripts/lib/alpine.sh:392).

Early input & build freshness
- Write a runtime build stamp to /etc/zero-os-build-id for embedded initramfs verification:
  - [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:568)
- Minor init refinements in [config.init](config/init:1) (ensures /home, consistent depmod path).

Rebuild helper improvements
- [scripts/rebuild-after-zinit.sh](scripts/rebuild-after-zinit.sh:1):
  - Added --verify-only; container-aware execution; selective marker clears only.
  - Prints stage status before/after; avoids --rebuild-from; resolves full kernel version for diagnostics.

Remote flist readiness + zinit
- Init scripts now probe BASE_URL readiness and accept FLISTS_BASE_URL/FLIST_BASE_URL; firmware target is /lib/firmware:
  - [sh.firmware.sh](config/zinit/init/firmware.sh:1)
  - [sh.modules.sh](config/zinit/init/modules.sh:1)

Container, docs, and utilities
- Stream container build logs by calling runtime build directly in [bash.docker_build_container()](scripts/lib/docker.sh:56).
- Docs updated to reflect firmware policy, runtime readiness, rebuild helper, early input, and GRUB USB:
  - [docs.NOTES.md](docs/NOTES.md)
  - [docs.PROMPT.md](docs/PROMPT.md)
  - [docs.review-rfs-integration.md](docs/review-rfs-integration.md)
- Added GRUB USB creator (referenced in docs): [scripts/make-grub-usb.sh](scripts/make-grub-usb.sh)

Cleanup
- Removed legacy/duplicated config trees under configs/ and config/zinit.old/.
- Minor newline and ignore fixes: [.gitignore](.gitignore:1)

Net effect
- Runtime now has correct USB HCDs/HID-generic and NIC+PHY coverage (Realtek/Broadcom), with matching firmware installed in initramfs.
- Rebuild workflow is minimal and host/container-aware; docs are aligned with implemented behavior.
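The underscore↔hyphen aliasing this commit describes can be sketched as a small helper that emits both candidate file names for a module. This is a sketch only; the real `initramfs_copy_resolved_modules()` also handles compressed `.ko` suffixes and full lookup paths, which are omitted here.

```shell
# Emit candidate .ko file names for a module: on-disk files may use hyphens
# where the module name uses underscores (e.g. xhci_pci -> xhci-pci.ko).
module_name_candidates() {
    name="$1"
    echo "${name}.ko"
    alias_name="$(echo "$name" | tr '_' '-')"
    # Only print the alias when it actually differs from the literal name
    [ "$alias_name" != "$name" ] && echo "${alias_name}.ko"
    return 0
}
```

A copy routine would then try each candidate in turn under the module directory before declaring the module missing.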
2025-09-23 14:03:01 +02:00
2fba2bd4cd initramfs+kernel: path anchors, helper, and init debug hook
initramfs: anchor relative paths to PROJECT_ROOT in [bash.initramfs_validate()](scripts/lib/initramfs.sh:799) and [bash.initramfs_create_cpio()](scripts/lib/initramfs.sh:688) to avoid CWD drift. Add diagnostics logs.

kernel: anchor kernel output path to PROJECT_ROOT in [bash.kernel_build_with_initramfs()](scripts/lib/kernel.sh:174) to ensure dist/vmlinuz.efi is under PROJECT_ROOT/dist.

helper: add [scripts/rebuild-after-zinit.sh](scripts/rebuild-after-zinit.sh) to incrementally rebuild after zinit/modules.conf/init changes. Default: initramfs-only (recreates cpio). Flags: --with-kernel, --refresh-container-mods, --run-tests. Uses --rebuild-from=initramfs_create when rebuilding kernel.

init: add early debug shell on kernel param initdebug=true; prefer /init-debug when present else spawn /bin/sh -l. See [config/init](config/init:1).

modules(stage1): add USB keyboard support (HID + host controllers) in [config/modules.conf](config/modules.conf:1): usbhid, hid_generic, hid, xhci/ehci/ohci/uhci.
2025-09-20 16:11:44 +02:00
310e11d2bf rfs(firmware): pack full Alpine linux-firmware set from container and overmount /lib/firmware
Change the pack source: install all linux-firmware* packages in the container and pack from /lib/firmware via [bash.rfs_common_install_all_alpine_firmware_packages()](scripts/rfs/common.sh:290), used by [bash.pack-firmware.sh](scripts/rfs/pack-firmware.sh:21). At runtime, overmount the firmware flist on /lib/firmware by updating [sh.firmware.sh](config/zinit/init/firmware.sh:10). Update docs to reflect the /lib/firmware mount and the new pack strategy.
2025-09-19 08:27:10 +02:00
79ed723303 Notes.md, absolute path normalizing
- edit NOTES.md for updating line numbers
- add check for using normalized path in initramfs normalization
2025-09-18 21:45:21 +02:00
d649b7e6bf docker: add perl back to builder image
rfs build depends on perl; removing it caused failures. Ensure perl is installed in the container image to restore rfs functionality.
2025-09-18 18:29:32 +02:00
4f67ea488f docs: add PROMPT.md – on-repo prompt for debugging, build, and ops with function/file jump-points
2025-09-18 16:18:06 +02:00
815f695ad3 Some adds:
- realtek PHY list module
- console agetty
2025-09-09 22:01:30 +02:00
fe8c48a862 sync: apply remote flist fallback, passwordless root finalize, path normalization, INITRAMFS_ARCHIVE guard, /home ensure, and notes
2025-09-09 21:24:28 +02:00
16955ea84f build: guard INITRAMFS_ARCHIVE in stage_kernel_build for incremental runs
initramfs: ensure essential dirs incl. /home exist during finalize and validate 'home' as essential item
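A minimal sketch of the INITRAMFS_ARCHIVE guard described here, assuming PROJECT_ROOT is provided by the build environment (the real check lives in stage_kernel_build and aborts the stage instead of only reporting):

```shell
# Default INITRAMFS_ARCHIVE to the canonical dist path so incremental runs
# that skip earlier stages still point at the right artifact.
PROJECT_ROOT="${PROJECT_ROOT:-$(pwd)}"
INITRAMFS_ARCHIVE="${INITRAMFS_ARCHIVE:-${PROJECT_ROOT}/dist/initramfs.cpio.xz}"

if [ ! -f "$INITRAMFS_ARCHIVE" ]; then
    # In build.sh this condition would fail the stage; here we only report.
    echo "initramfs archive missing: $INITRAMFS_ARCHIVE" >&2
fi
```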
2025-09-09 17:00:38 +02:00
998e40c2e5 zinit(init): remote flist fallback from zos.grid.tf when local manifests are missing
firmware.sh: if no local firmware-latest.fl, fetch https://zos.grid.tf/store/flists/firmware-latest.fl using wget or busybox wget, then mount via rfs.

modules.sh: if no local modules-6.16.5-arch1-1.fl, fetch https://zos.grid.tf/store/flists/modules-6.16.5-arch1-1-Zero-OS.fl using wget or busybox wget, then mount via rfs.

Keep env overrides MODULES_FLIST/FIRMWARE_FLIST and RFS_BIN semantics.
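The fallback can be sketched as follows. This is a sketch, not the exact firmware.sh: the local path and helper name are assumptions, while the store URL, the wget/busybox-wget sequence, and the RFS_BIN override come from the commit.

```shell
# Fetch an flist from the remote store only when no local manifest exists.
fetch_flist_if_missing() {
    flist="$1"
    url="$2"
    if [ -f "$flist" ]; then
        echo "present"
        return 0
    fi
    wget -O "$flist" "$url" 2>/dev/null || busybox wget -O "$flist" "$url"
}

# Boot-time usage sketch (firmware case; mount target per the init scripts):
# fetch_flist_if_missing /etc/rfs/firmware-latest.fl \
#     https://zos.grid.tf/store/flists/firmware-latest.fl
# "${RFS_BIN:-rfs}" mount -m /etc/rfs/firmware-latest.fl /lib/firmware &
```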
2025-09-09 16:23:09 +02:00
0db55fdc6e docs: add comprehensive repository map and operational notes (build flow, branding passwordless policy, path normalization, container tools)
2025-09-09 16:17:10 +02:00
c10580d171 branding: enforce passwordless root via passwd -d -R; remove direct passwd/shadow edits
- initramfs: switch to passwd -d -R in scripts/lib/initramfs.sh:initramfs_finalize_customization() for shadow-aware passwordless root (aligned with 9423b708 intent); drop the sed and chpasswd paths and add validation diagnostics.
- common: normalize INSTALL_DIR/COMPONENTS_DIR/KERNEL_DIR/DIST_DIR to absolute paths after sourcing config to prevent validation resolving under kernel/current.
- Dockerfile: include shadow (for passwd/chpasswd); ensure openssl and openssl-dev are present; remove perl.
- config: introduce ZEROOS_PASSWORDLESS_ROOT defaulting to true and comment out the password vars.
- docs: NOTES.md updated with diagnostics and flow.
2025-09-09 13:59:44 +02:00
e70a35ddc8 build: ensure stable container CWD to PROJECT_ROOT before stages
• Normalize CWD inside container to PROJECT_ROOT to prevent relative path issues in validation and downstream stages via [bash.setup_build_environment()](scripts/build.sh:133)

• Complements earlier hardening in [bash.initramfs_validate()](scripts/lib/initramfs.sh:774) that resolves absolute paths and checks existence
2025-09-09 11:48:17 +02:00
6090ce57da initramfs_validate: resolve path and harden existence check
• Resolve input dir to absolute with resolve_path and perform early -d check in [bash.initramfs_validate()](scripts/lib/initramfs.sh:774) to avoid safe_execute aborts on missing paths

• Use plain ls/find logging for sanity snapshot (not safe_execute) so validation reports context even if directory is absent
2025-09-09 11:46:59 +02:00
8465f00590 initramfs: fix rootless perms for etc/zinit and add diagnostics
• Ensure host/rootless traversal for zinit configs: make etc/zinit and etc/zinit/init 755 prior to recursive normalization; then set dirs=755, files=644, and mark *.sh executable in [bash.initramfs_setup_zinit()](scripts/lib/initramfs.sh:12)

• Add pre-CPIO sanity logs to catch empty/mis-scoped archives: top-level ls, file count, and essential presence checks in [bash.initramfs_create_cpio()](scripts/lib/initramfs.sh:658)

• Add validation-time sanity snapshot of top-level and entry count in [bash.initramfs_validate()](scripts/lib/initramfs.sh:754)
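The permission normalization described in the first bullet, as a standalone sketch (the directory layout is assumed to match the initramfs tree; the real code lives in `initramfs_setup_zinit()`):

```shell
# Normalize zinit config permissions for rootless builds: traversable dirs,
# read-only config files, executable init scripts.
normalize_zinit_perms() {
    root="$1"
    # Ensure traversal before the recursive pass (rootless hosts may lack it)
    chmod 755 "$root/etc/zinit" "$root/etc/zinit/init"
    find "$root/etc/zinit" -type d -exec chmod 755 {} +
    find "$root/etc/zinit" -type f -exec chmod 644 {} +
    find "$root/etc/zinit" -type f -name '*.sh' -exec chmod 755 {} +
}
```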
2025-09-09 11:32:08 +02:00
ae5eea5b2f build/initramfs/rfs: stabilize paths, tests; add branding guard; ntp robustness
• rfs_flists: normalize CWD to PROJECT_ROOT; invoke packers via absolute paths (fix relative lookup under kernel/current)

• initramfs_create_cpio: redirect to absolute output path; add explicit customization verification logs

• initramfs_test: default INITRAMFS_ARCHIVE to absolute dist/initramfs.cpio.xz when stage is invoked directly

• branding: guard motd/issue/password edits behind ZEROOS_BRANDING (or ZEROOS_REBRANDING) with default disabled; do not touch files unless enabled

• ntp: write /etc/ntp.conf only if absent; symlink ntpd.conf; runtime ntpd.sh parses kernel ntp= and falls back to Google NTP

• docs/config: add commented ZEROOS_BRANDING/REBRANDING examples to config/build.conf
2025-09-09 10:36:30 +02:00
36190f6704 initramfs: use /etc/ntp.conf (with ntpd.conf symlink), fix CPIO redirection, add customization logs
• scripts/lib/initramfs.sh: write /etc/ntp.conf, symlink ntpd.conf if absent; compute absolute output path before cd so cpio|xz redirection works; emit verification logs around initramfs_finalize_customization()

• config/zinit/init/ntpd.sh: robust parsing of kernel ntp=, safe defaults, and launch BusyBox ntpd with -p servers
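The ntp= parsing can be sketched as a pure function over the command-line string. A sketch only: the real ntpd.sh reads /proc/cmdline directly, and the exact fallback host is an assumption beyond "Google NTP".

```shell
# Extract ntp=<server[,server...]> from a kernel command line; fall back to
# Google NTP when absent, and emit a space-separated server list for ntpd -p.
parse_ntp_servers() {
    cmdline="$1"
    servers=""
    for arg in $cmdline; do
        case "$arg" in
            ntp=*) servers="${arg#ntp=}" ;;
        esac
    done
    echo "${servers:-time.google.com}" | tr ',' ' '
}

# Launch sketch: one -p flag per server, BusyBox ntpd kept in the foreground.
# for s in $(parse_ntp_servers "$(cat /proc/cmdline)"); do
#     args="$args -p $s"
# done
# exec ntpd -n $args
```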
2025-09-09 09:41:34 +02:00
9aecfe26ac zinit: stabilize ntp/network/getty runtime
• ntp: robust /etc/ntp.conf symlink, safe defaults, avoid nounset, keep BusyBox CLI -p usage

• network: wrap dhcpcd to create dhcpcd user/group if missing; run as root if needed

• console: set getty console to 115200 vt100
2025-09-08 23:54:14 +02:00
652d38abb1 build/rfs: integrate RFS flists + runtime orchestration
• Add standalone RFS tooling: scripts/rfs/common.sh, pack-modules.sh, pack-firmware.sh, verify-flist.sh

• Patch flist route.url with read-only Garage S3 credentials; optional HTTPS store row; optional manifest upload via mcli

• Build integration: stage_rfs_flists in scripts/build.sh to pack and embed manifests under initramfs/etc/rfs

• Runtime: add zinit units rfs-modules (after: network), rfs-firmware (after: network) as daemons; add udev-rfs oneshot post-mount

• Keep early udev-trigger oneshot to coldplug NICs before RFS mounts

• Firmware flist reproducible naming: respect FIRMWARE_TAG from env or config/build.conf, default to latest

• Docs: update docs/rfs-flists.md with runtime ordering, reproducible tagging, verification steps
2025-09-08 23:39:20 +02:00
afd4f4c6f9 feat(rfs): flist pack to S3 + read-only route embedding + zinit mount scripts; docs; dev-container tooling
Summary
- Implemented plain S3-only flist workflow (no web endpoint). rfs pack uploads blobs using write creds; flist route.url is patched to embed read-only S3 credentials so rfs mount reads directly from S3.

Changes
1) New RFS tooling (scripts/rfs/)
   - common.sh:
     - Compute FULL_KERNEL_VERSION from configs (no uname).
     - Load S3 config and construct pack store URI.
     - Build read-only S3 route URL and patch flist (sqlite).
     - Helpers to locate modules/firmware trees and rfs binary.
   - pack-modules.sh:
     - Pack /lib/modules/<FULL_KERNEL_VERSION> to dist/flists/modules-<FULL_KERNEL_VERSION>.fl
     - Patch flist route to s3://READ:READ@host:port/ROUTE_PATH?region=ROUTE_REGION (default /blobs, garage).
     - Optional upload of .fl using MinIO client (mcli/mc).
   - pack-firmware.sh:
     - Source firmware from $PROJECT_ROOT/firmware (fallback to initramfs/lib/firmware).
     - Pack to dist/flists/firmware-<TAG_OR_DATE>.fl (FIRMWARE_TAG or YYYYMMDD).
     - Patch flist route to read-only S3; optional .fl upload via mcli/mc.
   - verify-flist.sh:
     - rfs flist inspect/tree; optional mount test (best effort).
   - patch-stores.sh:
     - Helper to patch stores (kept though not used by default).
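The route building and flist patching in common.sh can be sketched as below. Hedged: the `route` table and `url` column names are assumptions about the rfs flist sqlite schema, and the helper names are illustrative.

```shell
# Build the read-only S3 route URL that gets embedded into the flist.
build_route_url() {
    read_key="$1"; read_secret="$2"; hostport="$3"
    path="${ROUTE_PATH:-/blobs}"
    region="${ROUTE_REGION:-garage}"
    echo "s3://${read_key}:${read_secret}@${hostport}${path}?region=${region}"
}

# Rewrite the flist's store route in place via sqlite3 (schema assumed).
patch_flist_route() {
    flist="$1"; url="$2"
    sqlite3 "$flist" "UPDATE route SET url = '${url}';"
}
```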

2) Dev-container (Dockerfile)
   - Added sqlite and MinIO client package for manifest patching/upload (expect mcli binary at runtime; scripts support both mcli/mc).
   - Retains rustup and musl target for building rfs/zinit/mycelium.

3) Config and examples
   - config/rfs.conf.example:
     - S3_ENDPOINT/S3_REGION/S3_BUCKET/S3_PREFIX
     - S3_ACCESS_KEY/S3_SECRET_KEY (write)
     - READ_ACCESS_KEY/READ_SECRET_KEY (read-only)
     - ROUTE_ENDPOINT (defaults to S3_ENDPOINT), ROUTE_PATH=/blobs, ROUTE_REGION=garage
     - MANIFESTS_SUBPATH, UPLOAD_MANIFESTS (mcli upload optional)
   - config/rfs.conf updated by user with real values (not committed here; example included).
   - config/modules.conf minor tweak (staged).

4) Zinit mount scripts (config/zinit/init/)
   - firmware.sh:
     - Mounts firmware-latest.fl over /usr/lib/firmware using rfs mount (env override FIRMWARE_FLIST supported).
   - modules.sh:
     - Mounts modules-$(uname -r).fl over /lib/modules/$(uname -r) (env override MODULES_FLIST supported).
   - Both skip if target already mounted and respect RFS_BIN env.

5) Documentation
   - docs/rfs-flists.md:
     - End-to-end flow, S3-only route URL patching, mcli upload notes.
   - docs/review-rfs-integration.md:
     - Integration points, build flow, and post-build standalone usage.
   - docs/depmod-behavior.md:
     - depmod reads .modinfo; recommend prebuilt modules.*(.bin); use depmod -A only on mismatch.

6) Utility
   - scripts/functionlist.md synced with current functions.

Behavioral details
- Pack (write):
  s3://S3_ACCESS_KEY:S3_SECRET_KEY@HOST:PORT/S3_BUCKET/S3_PREFIX?region=REGION
- Flist route (read, post-patch):
  s3://READ_ACCESS_KEY:READ_SECRET_KEY@HOST:PORT/ROUTE_PATH?region=ROUTE_REGION
  Defaults: ROUTE_PATH=/blobs, ROUTE_REGION=garage; ROUTE_ENDPOINT derived from S3_ENDPOINT if not set.
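The write-credential pack URI above can be assembled mechanically from the rfs.conf values; a sketch (variable names follow the config example, host/port derivation is an assumption):

```shell
# Build the pack store URI used by `rfs pack` from sourced rfs.conf values.
build_pack_url() {
    hostport="${S3_ENDPOINT#*://}"   # strip the http:// or https:// scheme
    echo "s3://${S3_ACCESS_KEY}:${S3_SECRET_KEY}@${hostport}/${S3_BUCKET}/${S3_PREFIX}?region=${S3_REGION}"
}
```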

Runtime mount examples
- Modules:
  rfs mount -m dist/flists/modules-6.12.44-Zero-OS.fl /lib/modules/6.12.44-Zero-OS
- Firmware:
  rfs mount -m dist/flists/firmware-YYYYMMDD.fl /usr/lib/firmware

Notes
- FUSE policy: If "allow_other" error occurs, enable user_allow_other in /etc/fuse.conf or run mounts as root.
- WEB_ENDPOINT rewrite is disabled by default (set WEB_ENDPOINT=""). Plain S3 route is embedded in flists.
- MinIO client binary in dev-container is mcli; scripts support mcli (preferred) and mc (fallback).

Files added/modified
- Added: scripts/rfs/{common.sh,pack-modules.sh,pack-firmware.sh,verify-flist.sh,patch-stores.sh}
- Added: config/zinit/init/{firmware.sh,modules.sh}
- Added: docs/{rfs-flists.md,review-rfs-integration.md,depmod-behavior.md}
- Added: config/rfs.conf.example
- Modified: Dockerfile, scripts/functionlist.md, config/modules.conf, config/zinit/sshd-setup.yaml, .gitignore
2025-09-08 22:51:53 +02:00
1141 changed files with 5981 additions and 347143 deletions

.gitignore (vendored): 6 lines changed

@@ -38,3 +38,9 @@ linux-*.tar.xz
# OS generated files
.DS_Store
Thumbs.db
# files containing secrets
config/rfs.conf
# volumes created in root
zosvol*

AGENTS.md (new file): 33 lines

@@ -0,0 +1,33 @@
# AGENTS.md
This file provides guidance to agents when working with code in this repository.
Non-obvious build/run facts (read from scripts):
- Always build inside the containerized toolchain; ./scripts/build.sh will spawn a transient container, or use the persistent dev container via ./scripts/dev-container.sh start then ./scripts/dev-container.sh build.
- The incremental pipeline is stage-driven with .build-stages markers; list status via ./scripts/build.sh --show-stages and force a subset with --rebuild-from=<stage>.
- Outputs are anchored to PROJECT_ROOT (normalized in [bash.common.sh](scripts/lib/common.sh:236)):
  - Kernel: dist/vmlinuz.efi (created by [bash.kernel_build_with_initramfs()](scripts/lib/kernel.sh:174))
  - Initramfs archive: dist/initramfs.cpio.xz (created by [bash.initramfs_create_cpio()](scripts/lib/initramfs.sh:688))
- Kernel embeds the initramfs (CONFIG_INITRAMFS_SOURCE set by [bash.kernel_modify_config_for_initramfs()](scripts/lib/kernel.sh:130)); no separate initrd is required in normal flow.
Fast iteration helpers (init/zinit/modules.conf):
- Use ./scripts/rebuild-after-zinit.sh to re-copy /init, re-apply zinit, re-resolve/copy modules, and recreate the cpio (initramfs-only by default).
- Flags: --with-kernel (also re-embed kernel; forces --rebuild-from=initramfs_create), --refresh-container-mods (rebuild container /lib/modules if missing), --verify-only (report changes since last cpio).
- DEBUG=1 shows full safe_execute logs and stage timings.
Critical conventions (avoid breakage):
- Use logging/safety helpers from [bash.common.sh](scripts/lib/common.sh:1): log_info/warn/error/debug, safe_execute, section_header.
- Paths must be anchored to PROJECT_ROOT (already normalized after sourcing config) to avoid CWD drift (kernel builds cd into kernel/current).
- Do not edit /etc/shadow directly; passwordless root is applied by chroot ${initramfs_dir} passwd -d root in [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:575).
- initdebug=true on kernel cmdline opens an early shell from [config/init](config/init) even without /init-debug.
Modules and firmware (non-obvious flow):
- Container /lib/modules/<FULL_VERSION> is the authoritative source for dependency resolution and copying into initramfs (ensure kernel_modules stage ran at least once).
- RFS firmware pack now installs all linux-firmware* into container and packs from /lib/firmware; runtime overmount targets /lib/firmware (not /usr/lib/firmware).
Troubleshooting gotchas:
- If “Initramfs directory not found: initramfs” or kernel output in wrong place, path anchoring patches exist in [bash.initramfs_create_cpio()](scripts/lib/initramfs.sh:688), [bash.initramfs_validate()](scripts/lib/initramfs.sh:799), and [bash.kernel_build_with_initramfs()](scripts/lib/kernel.sh:174); ensure DEBUG logs show “... anchored to PROJECT_ROOT”.
- On host, the rebuild helper delegates to ./scripts/dev-container.sh so you don't need to manually start the container.
Utilities:
- Create BIOS+UEFI USB with embedded-initramfs kernel: sudo ./scripts/make-grub-usb.sh /dev/sdX --kparams "console=ttyS0 initdebug=true" (use --with-initrd only if you want a separate initrd on ESP).

Dockerfile

@@ -7,6 +7,7 @@ RUN apk add --no-cache \
rustup \
upx \
git \
openssh-client \
wget \
curl \
tar \
@@ -18,8 +19,10 @@ RUN apk add --no-cache \
musl-dev \
musl-utils \
pkgconfig \
openssl-dev \
openssl openssl-dev \
libseccomp libseccomp-dev \
perl \
shadow \
bash \
findutils \
grep \
@@ -32,6 +35,8 @@ RUN apk add --no-cache \
elfutils-dev \
ncurses-dev \
kmod \
sqlite \
minio-client \
pahole
# Setup rustup with stable and musl target
@@ -50,6 +55,5 @@ RUN chown builder:builder /workspace
# Set environment variables - rustup handles everything
ENV PATH="/root/.cargo/bin:${PATH}"
ENV RUSTFLAGS="-C target-feature=+crt-static"
CMD ["/bin/bash"]

README.md: 132 lines changed

@@ -7,7 +7,7 @@ A comprehensive build system for creating custom Alpine Linux 3.22 x86_64 initra
- **Alpine Linux 3.22** miniroot as base system
- **zinit** process manager (complete OpenRC replacement)
- **Rootless containers** (Docker/Podman compatible)
- **Rust components** with musl targeting (zinit, rfs, mycelium)
- **Rust components** with musl targeting (zinit, rfs, mycelium, zosstorage)
- **Aggressive optimization** (strip + UPX compression)
- **2-stage module loading** for hardware support
- **GitHub Actions** compatible build pipeline
@@ -103,8 +103,7 @@ zosbuilder/
│ │ ├── alpine.sh # Alpine operations
│ │ ├── components.sh # source building
│ │ ├── initramfs.sh # assembly & optimization
│ │ ├── kernel.sh # kernel building
│ │ └── testing.sh # QEMU/cloud-hypervisor
│ │ └── kernel.sh # kernel building
│ ├── build.sh # main orchestrator
│ └── clean.sh # cleanup script
├── initramfs/ # build output (generated)
@@ -222,6 +221,7 @@ Services are migrated from existing `configs/zinit/` directory with proper initi
- **zinit**: Standard cargo build
- **rfs**: Standard cargo build
- **mycelium**: Build in `myceliumd/` subdirectory
- **zosstorage**: Build from the storage orchestration component for Zero-OS
4. Install binaries to initramfs
### Phase 4: System Configuration
@@ -243,34 +243,40 @@ Services are migrated from existing `configs/zinit/` directory with proper initi
### Phase 6: Packaging
1. Create `initramfs.cpio.xz` with XZ compression
2. Build kernel with embedded initramfs
3. Generate `vmlinuz.efi`
3. Generate `vmlinuz.efi` (default kernel)
4. Generate versioned kernel: `vmlinuz-{VERSION}-{ZINIT_HASH}.efi`
5. Optionally upload versioned kernel to S3 (set `UPLOAD_KERNEL=true`)
## Testing
### QEMU Testing
```bash
# Boot test with QEMU
./scripts/test.sh --qemu
# Boot test with QEMU (default)
./runit.sh
# With serial console
./scripts/test.sh --qemu --serial
# With custom parameters
./runit.sh --hypervisor qemu --memory 2048 --disks 3
```
### cloud-hypervisor Testing
```bash
# Boot test with cloud-hypervisor
./scripts/test.sh --cloud-hypervisor
./runit.sh --hypervisor ch
# With disk reset
./runit.sh --hypervisor ch --reset --disks 5
```
### Custom Testing
### Advanced Options
```bash
# Manual QEMU command
qemu-system-x86_64 \
-kernel dist/vmlinuz.efi \
-m 512M \
-nographic \
-serial mon:stdio \
-append "console=ttyS0,115200 console=tty1 loglevel=7"
# See all options
./runit.sh --help
# Custom disk size and bridge
./runit.sh --disk-size 20G --bridge zosbr --disks 4
# Environment variables
HYPERVISOR=ch NUM_DISKS=5 ./runit.sh
```
## Size Optimization
@@ -320,7 +326,7 @@ jobs:
- name: Build
run: ./scripts/build.sh
- name: Test
run: ./scripts/test.sh --qemu
run: ./runit.sh --hypervisor qemu
```
## Advanced Usage
@@ -353,6 +359,72 @@ function build_myapp() {
}
```
### S3 Uploads (Kernel & RFS Flists)
Automatically upload build artifacts to S3-compatible storage:
#### Configuration
Create `config/rfs.conf`:
```bash
S3_ENDPOINT="https://s3.example.com:9000"
S3_REGION="us-east-1"
S3_BUCKET="zos"
S3_PREFIX="flists/zosbuilder"
S3_ACCESS_KEY="YOUR_ACCESS_KEY"
S3_SECRET_KEY="YOUR_SECRET_KEY"
```
#### Upload Kernel
```bash
# Enable kernel upload
UPLOAD_KERNEL=true ./scripts/build.sh
# Custom kernel subpath (default: kernel)
KERNEL_SUBPATH=kernels UPLOAD_KERNEL=true ./scripts/build.sh
```
**Uploaded files:**
- `s3://{bucket}/{prefix}/kernel/vmlinuz-{VERSION}-{ZINIT_HASH}.efi` - Versioned kernel
- `s3://{bucket}/{prefix}/kernel/kernels.txt` - Text index (one kernel per line)
- `s3://{bucket}/{prefix}/kernel/kernels.json` - JSON index with metadata
**Index files:**
The build automatically generates and uploads index files listing all available kernels in the S3 bucket. This enables:
- Easy kernel selection in web UIs (dropdown menus)
- Programmatic access without S3 API listing
- Metadata like upload timestamp and kernel count (JSON format)
**JSON index format:**
```json
{
"kernels": [
"vmlinuz-6.12.44-Zero-OS-abc1234.efi",
"vmlinuz-6.12.44-Zero-OS-def5678.efi"
],
"updated": "2025-01-04T12:00:00Z",
"count": 2
}
```
#### Upload RFS Flists
```bash
# Enable flist uploads
UPLOAD_MANIFESTS=true ./scripts/build.sh
```
Uploaded as:
- `s3://{bucket}/{prefix}/manifests/modules-{VERSION}.fl`
- `s3://{bucket}/{prefix}/manifests/firmware-{TAG}.fl`
#### Requirements
- MinIO Client (`mcli` or `mc`) must be installed
- Valid S3 credentials in `config/rfs.conf`
### Container Builds
Build in isolated container:
```bash
podman run --rm \
    ... \
    ./scripts/build.sh
```
### Cross-Platform Support (untested)
The build system supports multiple architectures:
```bash
export DEBUG=1
./scripts/build.sh
```
### Size Analysis
```bash
# Analyze initramfs contents
./scripts/analyze-size.sh
# Show largest files
find initramfs/ -type f -exec du -h {} \; | sort -rh | head -20
```
## Contributing
1. **Fork** the repository
```bash
# Setup development environment
./scripts/setup-dev.sh
./scripts/dev-container.sh start
# Run incremental build
./scripts/build.sh
# Check size impact
./scripts/analyze-size.sh --compare
# Test with QEMU
./runit.sh --hypervisor qemu
# Test with cloud-hypervisor
./runit.sh --hypervisor ch
```
## License

claude.md (new file)
# Claude Code Reference for Zero-OS Builder
This document provides essential context for Claude Code (or any AI assistant) working with this Zero-OS Alpine Initramfs Builder repository.
## Project Overview
**What is this?**
A sophisticated build system for creating custom Alpine Linux 3.22 x86_64 initramfs images with zinit process management, designed for Zero-OS deployment on ThreeFold Grid.
**Key Features:**
- Container-based reproducible builds (rootless podman/docker)
- Incremental staged build pipeline with completion markers
- zinit process manager (complete OpenRC replacement)
- RFS (Remote File System) for lazy-loading modules/firmware from S3
- Rust components built with musl static linking
- Aggressive size optimization (strip + UPX)
- Embedded initramfs in kernel (single vmlinuz.efi output)
## Repository Structure
```
zosbuilder/
├── config/ # All configuration files
│ ├── build.conf # Build settings (versions, paths, flags)
│ ├── packages.list # Alpine packages to install
│ ├── sources.conf # ThreeFold components to build
│ ├── modules.conf # 2-stage kernel module loading
│ ├── firmware.conf # Firmware to include in initramfs
│ ├── kernel.config # Linux kernel configuration
│ ├── init # /init script for initramfs
│ └── zinit/ # zinit service definitions (YAML)
├── scripts/
│ ├── build.sh # Main orchestrator (DO NOT EDIT LIGHTLY)
│ ├── clean.sh # Clean all artifacts
│ ├── dev-container.sh # Persistent dev container manager
│ ├── rebuild-after-zinit.sh # Quick rebuild helper
│ ├── lib/ # Modular build libraries
│ │ ├── common.sh # Logging, path normalization, utilities
│ │ ├── stages.sh # Incremental stage tracking
│ │ ├── docker.sh # Container lifecycle
│ │ ├── alpine.sh # Alpine extraction, packages, cleanup
│ │ ├── components.sh # Build Rust components from sources.conf
│ │ ├── initramfs.sh # Assembly, optimization, CPIO creation
│ │ └── kernel.sh # Kernel download, config, build, embed
│ └── rfs/ # RFS flist generation scripts
│ ├── common.sh # S3 config, version computation
│ ├── pack-modules.sh # Create modules flist
│ ├── pack-firmware.sh # Create firmware flist
│ └── verify-flist.sh # Inspect/test flists
├── docs/ # Detailed documentation
│ ├── NOTES.md # Operational knowledge & troubleshooting
│ ├── PROMPT.md # Agent guidance (strict debugger mode)
│ ├── TODO.md # Persistent checklist with code refs
│ ├── AGENTS.md # Quick reference for agents
│ ├── rfs-flists.md # RFS design and runtime flow
│ ├── review-rfs-integration.md # Integration points
│ └── depmod-behavior.md # Module dependency details
├── runit.sh # Test runner (QEMU/cloud-hypervisor)
├── initramfs/ # Generated initramfs tree
├── components/ # Generated component sources
├── kernel/ # Generated kernel source
├── dist/ # Final outputs
│ ├── vmlinuz.efi # Kernel with embedded initramfs
│ └── initramfs.cpio.xz # Standalone initramfs archive
└── .build-stages/ # Incremental build markers (*.done files)
```
## Core Concepts
### 1. Incremental Staged Builds
**How it works:**
- Each stage creates a `.build-stages/<stage_name>.done` marker on success
- Subsequent builds skip completed stages unless forced
- Use `./scripts/build.sh --show-stages` to see status
- Use `./scripts/build.sh --rebuild-from=<stage>` to restart from a specific stage
- Manually remove `.done` files to re-run specific stages
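Conceptually, the marker logic can be sketched as follows (a simplified illustration, not the actual `scripts/lib/stages.sh` implementation):

```bash
# Simplified sketch of the stage-marker pattern (illustrative only; the
# real logic lives in scripts/lib/stages.sh)
STAGE_DIR=".build-stages"

run_stage() {
    stage="$1"; shift
    marker="${STAGE_DIR}/${stage}.done"
    # Skip the stage if its marker exists and no forced rebuild was requested
    if [ -f "$marker" ] && [ "${FORCE_REBUILD:-false}" != "true" ]; then
        echo "skip: ${stage}"
        return 0
    fi
    # Run the stage command; create the marker only on success
    "$@" && mkdir -p "$STAGE_DIR" && touch "$marker"
}

run_stage alpine_extract echo "extracting rootfs"   # runs, creates marker
run_stage alpine_extract echo "extracting rootfs"   # prints: skip: alpine_extract
```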
**Build stages (in order):**
```
alpine_extract → alpine_configure → alpine_packages → alpine_firmware
→ components_build → components_verify → kernel_modules
→ init_script → components_copy → zinit_setup
→ modules_setup → modules_copy → cleanup → rfs_flists
→ validation → initramfs_create → initramfs_test → kernel_build
```
**Key insight:** The build ALWAYS runs inside a container. Host invocations auto-spawn containers.
### 2. Container-First Architecture
**Why containers?**
- Reproducible toolchain (Alpine 3.22 base with exact dependencies)
- Rootless execution (no privileged access needed)
- Isolation from host environment
- GitHub Actions compatible
**Container modes:**
- **Transient:** `./scripts/build.sh` spawns, builds, exits
- **Persistent:** `./scripts/dev-container.sh start/shell/build`
**Important:** Directory paths are normalized to absolute PROJECT_ROOT to avoid CWD issues when stages change directories (especially kernel builds).
### 3. Component Build System
**sources.conf format:**
```
TYPE NAME URL VERSION BUILD_FUNCTION [EXTRA]
```
**Example:**
```bash
git zinit https://github.com/threefoldtech/zinit master build_zinit
git rfs https://github.com/threefoldtech/rfs development build_rfs
git mycelium https://github.com/threefoldtech/mycelium v0.6.1 build_mycelium
release corex https://github.com/threefoldtech/corex/releases/download/2.1.4/corex-2.1.4-amd64-linux-static 2.1.4 install_corex rename=corex
```
**Build functions** are defined in `scripts/lib/components.sh` and handle:
- Rust builds with `x86_64-unknown-linux-musl` target
- Static linking via `RUSTFLAGS="-C target-feature=+crt-static"`
- Special cases (e.g., mycelium builds in `myceliumd/` subdirectory)
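A typical Rust build function follows this shape (a hedged sketch; the exact flags and per-component handling live in `scripts/lib/components.sh`):

```bash
# Sketch of a Rust component build function (illustrative; see
# scripts/lib/components.sh for the real implementations)
build_zinit() {
    cd "${COMPONENTS_DIR}/zinit" || return 1
    # Static musl linking so the binary runs in the minimal initramfs
    RUSTFLAGS="-C target-feature=+crt-static" \
        cargo build --release --target "${RUST_TARGET:-x86_64-unknown-linux-musl}"
}
```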
### 4. RFS Flists (Remote File System)
**Purpose:** Lazy-load kernel modules and firmware from S3 at runtime
**Flow:**
1. Build stage creates flists: `modules-<KERNEL_VERSION>.fl` and `firmware-<TAG>.fl`
2. Flists are SQLite databases containing:
- Content-addressed blob references
- S3 store URIs (patched for read-only access)
- Directory tree metadata
3. Flists embedded in initramfs at `/etc/rfs/`
4. Runtime: zinit units mount flists over `/lib/modules/` and `/lib/firmware/`
5. Dual udev coldplug: early (before RFS) for networking, post-RFS for new hardware
**Key files:**
- `scripts/rfs/pack-modules.sh` - Creates modules flist from container `/lib/modules/`
- `scripts/rfs/pack-firmware.sh` - Creates firmware flist from Alpine packages
- `config/zinit/init/modules.sh` - Runtime mount script
- `config/zinit/init/firmware.sh` - Runtime mount script
### 5. zinit Service Management
**No OpenRC:** This system uses zinit exclusively for process management.
**Service graph:**
```
/init → zinit → [stage1-modules, udevd, depmod]
→ udev-trigger (early coldplug)
→ network
→ rfs-modules + rfs-firmware (mount flists)
→ udev-rfs (post-RFS coldplug)
→ services
```
**Service definitions:** YAML files in `config/zinit/` with `after:`, `needs:`, `wants:` dependencies
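As an illustration, a minimal unit might look like the following (file name, script path, and dependency are hypothetical; real units live in `config/zinit/`):

```yaml
# Hypothetical zinit unit: config/zinit/network.yaml
exec: /etc/zinit/init/network.sh
oneshot: true
after:
  - udev-trigger
```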
### 6. Kernel Versioning and S3 Upload
**Versioned Kernel Output:**
- Standard kernel: `dist/vmlinuz.efi` (for compatibility)
- Versioned kernel: `dist/vmlinuz-{VERSION}-{ZINIT_HASH}.efi`
- Example: `vmlinuz-6.12.44-Zero-OS-a1b2c3d.efi`
**Version components:**
- `{VERSION}`: Full kernel version from `KERNEL_VERSION` + `CONFIG_LOCALVERSION`
- `{ZINIT_HASH}`: Short git commit hash from `components/zinit/.git`
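The naming can be reproduced with plain shell string assembly (values below are illustrative placeholders):

```bash
# Illustrative reconstruction of the versioned kernel filename
KERNEL_VERSION="6.12.44"
LOCALVERSION="-Zero-OS"      # from CONFIG_LOCALVERSION in config/kernel.config
ZINIT_HASH="a1b2c3d"         # e.g. git -C components/zinit rev-parse --short HEAD
versioned="vmlinuz-${KERNEL_VERSION}${LOCALVERSION}-${ZINIT_HASH}.efi"
echo "$versioned"            # vmlinuz-6.12.44-Zero-OS-a1b2c3d.efi
```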
**S3 Upload (optional):**
- Controlled by `UPLOAD_KERNEL=true` environment variable
- Uses MinIO client (`mcli` or `mc`) to upload to S3-compatible storage
- Uploads versioned kernel to: `s3://{bucket}/{prefix}/kernel/{versioned_filename}`
**Kernel Index Generation:**
After uploading, automatically generates and uploads index files:
- `kernels.txt` - Plain text, one kernel per line, sorted reverse chronologically
- `kernels.json` - JSON format with metadata (timestamp, count)
**Why index files?**
- S3 web interfaces often don't support directory listings
- Enables dropdown menus in web UIs without S3 API access
- Provides kernel discovery for deployment tools
**JSON index structure:**
```json
{
"kernels": ["vmlinuz-6.12.44-Zero-OS-abc1234.efi", ...],
"updated": "2025-01-04T12:00:00Z",
"count": 2
}
```
**Key functions:**
- `get_git_commit_hash()` in `scripts/lib/common.sh` - Extracts git hash
- `kernel_build_with_initramfs()` in `scripts/lib/kernel.sh` - Creates versioned kernel
- `kernel_upload_to_s3()` in `scripts/lib/kernel.sh` - Uploads to S3
- `kernel_generate_index()` in `scripts/lib/kernel.sh` - Generates and uploads index
## Critical Conventions
### Path Normalization
**Problem:** Stages can change CWD (kernel build uses `/workspace/kernel/current`)
**Solution:** All paths normalized to absolute at startup in `scripts/lib/common.sh:244`
**Variables affected:**
- `INSTALL_DIR` (initramfs/)
- `COMPONENTS_DIR` (components/)
- `KERNEL_DIR` (kernel/)
- `DIST_DIR` (dist/)
**Never use relative paths** when calling functions that might be in different CWDs.
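The normalization itself is a one-time prefixing of relative paths with the project root, roughly like this sketch (illustrative; the real code is in `scripts/lib/common.sh`):

```bash
# Sketch of absolute-path normalization (illustrative only)
PROJECT_ROOT="/workspace"
INSTALL_DIR="initramfs"

case "$INSTALL_DIR" in
    /*) ;;                                          # already absolute, keep as-is
    *)  INSTALL_DIR="${PROJECT_ROOT}/${INSTALL_DIR}" ;;
esac
echo "$INSTALL_DIR"                                 # /workspace/initramfs
```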
### Branding and Security
**Passwordless root enforcement:**
- Applied in `scripts/lib/initramfs.sh:575` via `passwd -d -R "${initramfs_dir}" root`
- Creates `root::` in `/etc/shadow` (empty password field)
- Controlled by `ZEROOS_BRANDING` and `ZEROOS_PASSWORDLESS_ROOT` flags
**Never edit /etc/shadow manually** - always use `passwd` or `chpasswd` with chroot.
### Module Loading Strategy
**2-stage approach:**
- **Stage 1:** Critical boot modules (virtio, e1000, scsi) - loaded by zinit early
- **Stage 2:** Extended hardware (igb, ixgbe, i40e) - loaded after network
**Config:** `config/modules.conf` with `stage1:` and `stage2:` prefixes
**Dependency resolution:**
- Uses `modinfo` to build dependency tree
- Resolves from container `/lib/modules/<FULL_VERSION>/`
- Must run after `kernel_modules` stage
### Firmware Policy
**For initramfs:** `config/firmware.conf` is the SINGLE source of truth
- Any firmware hints in `modules.conf` are IGNORED
- Prevents duplication/version mismatches
**For RFS:** Full Alpine `linux-firmware*` packages installed in container
- Packed from container `/lib/firmware/`
- Overmounts at runtime for extended hardware
## Common Workflows
### Full Build from Scratch
```bash
# Clean everything and rebuild
./scripts/build.sh --clean
# Or just rebuild all stages
./scripts/build.sh --force-rebuild
```
### Quick Iteration After Config Changes
```bash
# After editing zinit configs, init script, or modules.conf
./scripts/rebuild-after-zinit.sh
# With kernel rebuild
./scripts/rebuild-after-zinit.sh --with-kernel
# Dry-run to see what changed
./scripts/rebuild-after-zinit.sh --verify-only
```
### Minimal Manual Rebuild
```bash
# Remove specific stages
rm -f .build-stages/initramfs_create.done
rm -f .build-stages/validation.done
# Rebuild only those stages
DEBUG=1 ./scripts/build.sh
```
### Testing the Built Kernel
```bash
# QEMU (default)
./runit.sh
# cloud-hypervisor with 5 disks
./runit.sh --hypervisor ch --disks 5 --reset
# Custom memory and bridge
./runit.sh --memory 4096 --bridge zosbr
```
### Persistent Dev Container
```bash
# Start persistent container
./scripts/dev-container.sh start
# Enter shell
./scripts/dev-container.sh shell
# Run build inside
./scripts/dev-container.sh build
# Stop container
./scripts/dev-container.sh stop
```
## Debugging Guidelines
### Diagnostics-First Approach
**ALWAYS add diagnostics before fixes:**
1. Enable `DEBUG=1` for verbose safe_execute logs
2. Add strategic `log_debug` statements
3. Confirm hypothesis in logs
4. Then apply minimal fix
**Example:**
```bash
# Bad: Guess and fix
Edit file to fix suspected issue
# Good: Diagnose first
1. Add log_debug "Variable X=${X}, resolved=${resolved_path}"
2. DEBUG=1 ./scripts/build.sh
3. Confirm in output
4. Apply fix with evidence
```
### Key Diagnostic Functions
- `scripts/lib/common.sh`: `log_info`, `log_warn`, `log_error`, `log_debug`
- `scripts/lib/initramfs.sh:820`: Validation debug prints (input, PWD, PROJECT_ROOT, resolved paths)
- `scripts/lib/initramfs.sh:691`: Pre-CPIO sanity checks with file listings
### Common Issues and Solutions
**"Initramfs directory not found"**
- **Cause:** INSTALL_DIR interpreted as relative in different CWD
- **Fix:** Already patched - paths normalized at startup
- **Check:** Look for "Validation debug:" logs showing resolved paths
**"INITRAMFS_ARCHIVE unbound"**
- **Cause:** Incremental build skipped initramfs_create stage
- **Fix:** Already patched - stages default INITRAMFS_ARCHIVE if unset
- **Check:** `scripts/build.sh:401` logs "defaulting INITRAMFS_ARCHIVE"
**"Module dependency resolution fails"**
- **Cause:** Container `/lib/modules/<FULL_VERSION>` missing or stale
- **Fix:** `./scripts/rebuild-after-zinit.sh --refresh-container-mods`
- **Check:** Ensure `kernel_modules` stage completed successfully
**"Passwordless root not working"**
- **Cause:** Branding disabled or shadow file not updated
- **Fix:** Check ZEROOS_BRANDING=true in logs, verify /etc/shadow has `root::`
- **Verify:** Extract initramfs and `grep '^root:' etc/shadow`
## Important Files Quick Reference
### Must-Read Before Editing
- `scripts/build.sh` - Orchestrator with precise stage order
- `scripts/lib/common.sh` - Path normalization, logging, utilities
- `scripts/lib/stages.sh` - Stage tracking logic
- `config/build.conf` - Version pins, directory settings, flags
### Safe to Edit
- `config/zinit/*.yaml` - Service definitions
- `config/zinit/init/*.sh` - Runtime initialization scripts
- `config/modules.conf` - Module lists (stage1/stage2)
- `config/firmware.conf` - Initramfs firmware selection
- `config/packages.list` - Alpine packages
### Generated (Never Edit)
- `initramfs/` - Assembled initramfs tree
- `components/` - Downloaded component sources
- `kernel/` - Kernel source tree
- `dist/` - Build outputs
- `.build-stages/` - Completion markers
## Testing Architecture
**No built-in tests during build** - Tests run separately via `runit.sh`
**Why?**
- Build is for assembly, not validation
- Tests require hypervisor (QEMU/cloud-hypervisor)
- Separation allows faster iteration
**runit.sh features:**
- Multi-disk support (qcow2 for QEMU, raw for cloud-hypervisor)
- Network bridge/TAP configuration
- Persistent volumes (reset with `--reset`)
- Serial console logging
## Quick Command Reference
```bash
# Build
./scripts/build.sh # Incremental build
./scripts/build.sh --clean # Clean build
./scripts/build.sh --show-stages # Show completion status
./scripts/build.sh --rebuild-from=zinit_setup # Rebuild from stage
DEBUG=1 ./scripts/build.sh # Verbose output
# Rebuild helpers
./scripts/rebuild-after-zinit.sh # After zinit/init/modules changes
./scripts/rebuild-after-zinit.sh --with-kernel # Also rebuild kernel
./scripts/rebuild-after-zinit.sh --verify-only # Dry-run
# Testing
./runit.sh # QEMU test
./runit.sh --hypervisor ch # cloud-hypervisor test
./runit.sh --help # All options
# Dev container
./scripts/dev-container.sh start # Start persistent container
./scripts/dev-container.sh shell # Enter shell
./scripts/dev-container.sh build # Build inside container
./scripts/dev-container.sh stop # Stop container
# Cleanup
./scripts/clean.sh # Remove all generated files
rm -rf .build-stages/ # Reset stage markers
```
## Environment Variables
**Build control:**
- `DEBUG=1` - Enable verbose logging
- `FORCE_REBUILD=true` - Force rebuild all stages
- `REBUILD_FROM_STAGE=<name>` - Rebuild from specific stage
**Version overrides:**
- `ALPINE_VERSION=3.22` - Alpine Linux version
- `KERNEL_VERSION=6.12.44` - Linux kernel version
- `RUST_TARGET=x86_64-unknown-linux-musl` - Rust compilation target
**Firmware tagging:**
- `FIRMWARE_TAG=20250908` - Firmware flist version tag
**S3 upload control:**
- `UPLOAD_KERNEL=true` - Upload versioned kernel to S3 (default: false)
- `UPLOAD_MANIFESTS=true` - Upload RFS flists to S3 (default: false)
- `KERNEL_SUBPATH=kernel` - S3 subpath for kernel uploads (default: kernel)
**S3 configuration:**
- See `config/rfs.conf` for S3 endpoint, credentials, paths
- Used by both RFS flist uploads and kernel uploads
## Documentation Hierarchy
**Start here:**
1. `README.md` - User-facing guide with features and setup
2. This file (`claude.md`) - AI assistant context
**For development:**
3. `docs/NOTES.md` - Operational knowledge, troubleshooting
4. `docs/AGENTS.md` - Quick agent reference
5. `docs/TODO.md` - Current work checklist with code links
**For deep dives:**
6. `docs/PROMPT.md` - Strict debugger agent mode (diagnostics-first)
7. `docs/rfs-flists.md` - RFS design and implementation
8. `docs/review-rfs-integration.md` - Integration points analysis
9. `docs/depmod-behavior.md` - Module dependency deep dive
**Historical:**
10. `IMPLEMENTATION_PLAN.md` - Original design document
11. `GITHUB_ACTIONS.md` - CI/CD setup guide
## Project Philosophy
1. **Reproducibility:** Container-based builds ensure identical results
2. **Incrementality:** Stage markers minimize rebuild time
3. **Diagnostics-first:** Log before fixing, validate assumptions
4. **Minimal intervention:** Alpine + zinit only, no systemd/OpenRC
5. **Size-optimized:** Aggressive cleanup, strip, UPX compression
6. **Remote-ready:** RFS enables lazy-loading for extended hardware support
## Commit Message Guidelines
**DO NOT add Claude Code or AI assistant references to commit messages.**
Keep commits clean and professional:
- Focus on what changed and why
- Use conventional commit prefixes: `build:`, `docs:`, `fix:`, `feat:`, `refactor:`
- Be concise but descriptive
- No emoji unless project convention
- No "Generated with Claude Code" or "Co-Authored-By: Claude" footers
**Good example:**
```
build: remove testing.sh in favor of runit.sh
Replace inline boot testing with standalone runit.sh runner.
Tests now run separately from build pipeline for faster iteration.
```
**Bad example:**
```
build: remove testing.sh 🤖
Made some changes to testing.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
```
## Keywords for Quick Search
- **Build fails:** Check `DEBUG=1` logs, stage completion markers, container state
- **Module issues:** `kernel_modules` stage, `CONTAINER_MODULES_PATH`, depmod logs
- **Firmware missing:** `config/firmware.conf` for initramfs, RFS flist for runtime
- **zinit problems:** Service YAML syntax, dependency order, init script errors
- **Path errors:** Absolute path normalization in `common.sh:244`
- **Size too large:** Check cleanup stage, strip/UPX execution, package list
- **Container issues:** Rootless setup, subuid/subgid, podman vs docker
- **RFS mount fails:** S3 credentials, network readiness, flist manifest paths
- **Kernel upload:** `UPLOAD_KERNEL=true`, requires `config/rfs.conf`, MinIO client (`mcli`/`mc`)
- **Kernel index:** Auto-generated `kernels.txt`/`kernels.json` for dropdown UIs, updated on upload
---
**Last updated:** 2025-01-04
**Maintainer notes:** This file is the entry point for AI assistants. Keep it updated when architecture changes. Cross-reference with `docs/NOTES.md` for operational details.

config/build.conf
# System versions
ALPINE_VERSION="3.22"
KERNEL_VERSION="6.12.49"
# Rust configuration
RUST_TARGET="x86_64-unknown-linux-musl"
DIST_DIR="dist"
ALPINE_MIRROR="https://dl-cdn.alpinelinux.org/alpine"
KERNEL_SOURCE_URL="https://cdn.kernel.org/pub/linux/kernel"
# RFS flists (firmware manifest naming)
# FIRMWARE_TAG controls firmware flist manifest naming for reproducible builds.
# - If set, firmware manifest becomes: firmware-$FIRMWARE_TAG.fl
# - If unset, the build embeds firmware-latest.fl, while standalone pack may default to date-based naming.
# Examples:
# FIRMWARE_TAG="20250908"
# FIRMWARE_TAG="v1"
#FIRMWARE_TAG="latest"
# Branding and customization guard (default off)
# Set to "true" to enable Zero-OS branding and passwordless root in initramfs.
# Both variables are accepted; ZEROOS_BRANDING takes precedence if both set.
ZEROOS_BRANDING="true"
ZEROOS_REBRANDING="true"
# Root account configuration
# Provide either ZEROOS_ROOT_PASSWORD_HASH (preferred, SHA-512 crypt) or ZEROOS_ROOT_PASSWORD (plain, will be hashed during build)
# Legacy variable names also supported: ROOT_PASSWORD_HASH / ROOT_PASSWORD
# Passwordless root is the default for branded builds when no password is provided.
ZEROOS_PASSWORDLESS_ROOT="true"
# ZEROOS_ROOT_PASSWORD_HASH="" # optional, preferred when setting a password
# ZEROOS_ROOT_PASSWORD="" # optional, dev-only; if set, overrides passwordless
# Feature flags
ENABLE_STRIP="true"
ENABLE_UPX="true"
ENABLE_AGGRESSIVE_CLEANUP="true"
ENABLE_2STAGE_MODULES="true"
UPLOAD_KERNEL=true
# Debug and development
DEBUG_DEFAULT="0"

config/firmware.conf
# Alpine Linux provides separate firmware packages for hardware support
# Format: FIRMWARE_PACKAGE:DESCRIPTION
# Essential network firmware packages (wired NICs matching stage1 drivers)
linux-firmware-bnx2:Broadcom NetXtreme (bnx2) firmware
linux-firmware-bnx2x:Broadcom NetXtreme II (bnx2x) firmware
linux-firmware-tigon:Broadcom tg3 (Tigon) firmware
linux-firmware-e100:Intel PRO/100 firmware
linux-firmware-intel:Intel NIC firmware (covers e1000e, igb, ixgbe, i40e, ice)
linux-firmware-rtl_nic:Realtek NIC firmware (r8169, etc.)
linux-firmware-realtek:Realtek NIC firmware (meta)
linux-firmware-qlogic:QLogic NIC firmware
# Storage firmware (if needed)
# linux-firmware-marvell:Marvell storage/network firmware (not available in Alpine 3.22)

config/init
#!/bin/sh
# Alpine-based Zero-OS Init Script
# Maintains identical flow to original busybox version
mount -t devtmpfs devtmpfs /dev
mkdir -p /dev/pts
mount -t devpts devpts /dev/pts
# Re-initialize module dependencies for basic init
depmod -a
echo "[+] building ram filesystem"
echo " copying /var..."
cp -ar /var $target
echo " copying /run..."
cp -ar /run $target
echo " creating /home"
mkdir $target/home
# Create essential directories
mkdir -p $target/dev
if [ -x /sbin/udevd ]; then
echo " starting udevd..."
udevd --daemon
# Preload keyboard input modules early so console works before zinit and rfs mounts
echo "[+] preloading keyboard input modules"
for m in i8042 atkbd usbhid hid hid_generic evdev xhci_pci xhci_hcd ehci_pci ehci_hcd ohci_pci ohci_hcd uhci_hcd; do
modprobe "$m"
done
echo " triggering device discovery..."
udevadm trigger --action=add --type=subsystems
udevadm trigger --action=add --type=devices
udevadm settle
echo " stopping udevd..."
kill $(pidof udevd) || true
else
echo " warning: udevd not found, skipping hardware detection"
fi
echo "[+] loading essential drivers"
# Load core drivers for storage and network
modprobe btrfs 2>/dev/null || true
modprobe fuse 2>/dev/null || true
modprobe overlay 2>/dev/null || true
# Load storage drivers
modprobe ahci 2>/dev/null || true
modprobe nvme 2>/dev/null || true
modprobe virtio_blk 2>/dev/null || true
modprobe virtio_scsi 2>/dev/null || true
modprobe virtio_pci 2>/dev/null || true
# Load network drivers
modprobe virtio_net 2>/dev/null || true
modprobe e1000 2>/dev/null || true
modprobe e1000e 2>/dev/null || true
echo "[+] debug hook: initdebug=true or /init-debug"
if grep -qw "initdebug=true" /proc/cmdline; then
if [ -x /init-debug ]; then
echo " initdebug=true: executing /init-debug ..."
sh /init-debug
else
echo " initdebug=true: starting interactive shell (no /init-debug). Type 'exit' to continue."
debug="-d"
/bin/busybox sh
fi
elif [ -x /init-debug ]; then
echo " executing /init-debug ..."
sh /init-debug
fi
# Unmount init filesystems
umount /proc 2>/dev/null || true
umount /sys 2>/dev/null || true
echo "[+] switching root"
echo " exec switch_root /mnt/root /sbin/zinit ${debug} init"
exec switch_root /mnt/root /sbin/zinit ${debug} init
# switch_root failed, drop into shell
/bin/busybox sh
##

(one file diff suppressed because it is too large)

config/modules.conf
# Module loading specification for Zero-OS Alpine initramfs
# Format: STAGE:MODULE
# Firmware selection is authoritative in config/firmware.conf; do not add firmware hints here.
# Stage 1: boot-critical storage, core virtio, networking, overlay, and input (USB/HID/keyboard)
# Storage
stage1:libata
stage1:libahci
stage1:ahci
stage1:scsi_mod
stage1:sd_mod
stage1:nvme_core
stage1:nvme
stage1:virtio_blk
stage1:virtio_scsi
# Core virtio
stage1:virtio
stage1:virtio_ring
stage1:virtio_pci
stage1:virtio_pci_legacy_dev
stage1:virtio_pci_modern_dev
# Networking (common NICs)
stage1:virtio_net
stage1:e1000
stage1:e1000e
stage1:igb
stage1:ixgbe
stage1:igc
stage1:i40e
stage1:ice
stage1:r8169
stage1:8139too
stage1:8139cp
stage1:bnx2
stage1:bnx2x
stage1:tg3
stage1:tun
# PHY drivers
stage1:realtek
# Broadcom PHY families (required for many Broadcom NICs)
stage1:broadcom
stage1:bcm7xxx
stage1:bcm87xx
stage1:bcm_phy_lib
stage1:bcm_phy_ptp
# Filesystems / overlay
stage1:overlay
stage1:fuse
# USB host controllers and HID/keyboard input
stage1:xhci_pci
stage1:xhci_hcd
stage1:ehci_pci
stage1:ehci_hcd
stage1:ohci_pci
stage1:ohci_hcd
stage1:uhci_hcd
stage1:usbhid
stage1:hid_generic
stage1:hid
stage1:atkbd
stage1:libps2
stage1:i8042
stage1:evdev
stage1:serio_raw
stage1:serio
# Zero-OS core networking uses openvswitch (VXLAN over mycelium)
stage1:openvswitch
# Keep stage2 empty; we only use stage1 in this build
# stage2: (intentionally unused)

config/packages.list

alpine-baselayout
alpine-baselayout-data
busybox
musl
agetty
# Module loading & hardware detection
eudev
eudev
eudev-libs
eudev-netifnames
kmod
fuse3
pciutils
efitools
efibootmgr
# Console/terminal management
util-linux
wget
# Essential networking (for Zero-OS connectivity)
iproute2
ethtool
nftables
# Filesystem support (minimal)
btrfs-progs
dosfstools
sgdisk
# Essential libraries only
zlib
haveged
openssh-server
zellij

79
config/rfs.conf.example Normal file
View File

@@ -0,0 +1,79 @@
# RFS S3 (Garage) configuration for flist storage and HTTP read endpoint
# Copy this file to config/rfs.conf and fill in real values (do not commit secrets).
# S3 API endpoint of your Garage server, including scheme and optional port
# Examples:
# https://hub.grid.tf
# http://minio:9000
S3_ENDPOINT="https://hub.grid.tf"
# AWS region string expected by the S3-compatible API
S3_REGION="garage"
# Bucket and key prefix used for RFS store (content-addressed blobs)
# The RFS store path will be: s3://.../<S3_BUCKET>/<S3_PREFIX>
S3_BUCKET="zos"
S3_PREFIX="zos/store"
# Access credentials (required by rfs pack to push blobs)
S3_ACCESS_KEY="REPLACE_ME"
S3_SECRET_KEY="REPLACE_ME"
# Optional: HTTP(S) web endpoint used at runtime to fetch blobs without signed S3
# This is the base URL that serves the same objects as the S3 store, typically a
# public or authenticated gateway in front of Garage that allows read access.
# The scripts will patch the .fl (sqlite) stores table to use this endpoint.
# Ensure this path maps to the same content-addressed layout expected by rfs.
# Example:
# https://hub.grid.tf/zos/zosbuilder/store
WEB_ENDPOINT="https://hub.grid.tf/zos/zosbuilder/store"
# Optional: where to upload the .fl manifest sqlite file (separate from blob store)
# If you want to keep manifests alongside blobs, a common pattern is:
# s3://<S3_BUCKET>/<S3_PREFIX>/manifests/
# Scripts will create manifests/ under S3_PREFIX automatically if left default.
MANIFESTS_SUBPATH="manifests"
# Behavior flags (can be overridden by CLI flags or env)
# Whether to keep s3:// store as a fallback entry in the .fl after adding WEB_ENDPOINT
KEEP_S3_FALLBACK="true"
# Whether to attempt uploading .fl manifests to S3 (requires MinIO Client: mc)
UPLOAD_MANIFESTS="true"
# Read-only credentials for route URL in manifest (optional; defaults to write keys above)
# These will be embedded into the flist 'route.url' so runtime mounts can read directly from Garage.
# If not set, scripts fall back to S3_ACCESS_KEY/S3_SECRET_KEY.
READ_ACCESS_KEY="REPLACE_ME_READ"
READ_SECRET_KEY="REPLACE_ME_READ"
# Route endpoint and parameters for flist route URL patching
# - ROUTE_ENDPOINT: host:port base for Garage gateway (scheme is ignored; host:port is extracted)
# If not set, defaults to S3_ENDPOINT
# - ROUTE_PATH: path to the blob route (default: /blobs)
# - ROUTE_REGION: region string for Garage (default: garage)
ROUTE_ENDPOINT="https://hub.grid.tf"
ROUTE_PATH="/zos/store"
ROUTE_REGION="garage"
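Since ROUTE_ENDPOINT ignores the scheme and only host:port is extracted, one plausible extraction using POSIX parameter expansion looks like this (illustrative; the actual scripts may use a different method):

```shell
# strip the scheme and any trailing path, keeping only host:port
ROUTE_ENDPOINT="https://hub.grid.tf:3900/ignored/path"
hostport="${ROUTE_ENDPOINT#*://}"   # drop "https://"
hostport="${hostport%%/*}"          # drop anything after the first "/"
echo "$hostport"   # hub.grid.tf:3900
```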
# RESP/DB-style blob store (design-time placeholders; optional)
# Enable to allow pack scripts or future rfs CLI to upload blobs to a RESP-compatible store.
# This does not change the existing S3 flow; RESP acts as an additional backend.
#
# Example URI semantics (see docs/rfs-flists.md additions):
# resp://host:port/db?prefix=blobs
resp+tls://host:port/db?prefix=blobs&ca=/etc/ssl/certs/ca.pem
# resp+sentinel://sentinelHost:26379/mymaster?prefix=blobs
#
# Minimal keys for a direct RESP endpoint
RESP_ENABLED="false"
RESP_ENDPOINT="localhost:6379" # host:port
RESP_DB="0" # integer DB index
RESP_PREFIX="zos/blobs" # namespace/prefix for content-addressed keys
RESP_USERNAME="" # optional
RESP_PASSWORD="" # optional
RESP_TLS="false" # true/false
RESP_CA="" # path to CA bundle when RESP_TLS=true
# Optional: Sentinel topology (overrides RESP_ENDPOINT for discovery)
RESP_SENTINEL="" # sentinelHost:port (comma-separated for multiple)
RESP_MASTER="" # Sentinel master name (e.g., "mymaster")
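The URI semantics documented above can be assembled from the minimal keys like so (a sketch; actual URI handling would live in the pack scripts or a future rfs CLI):

```shell
# build a direct-endpoint RESP URI from the config keys above
RESP_ENDPOINT="localhost:6379"
RESP_DB="0"
RESP_PREFIX="zos/blobs"
RESP_TLS="false"
if [ "$RESP_TLS" = "true" ]; then scheme="resp+tls"; else scheme="resp"; fi
uri="${scheme}://${RESP_ENDPOINT}/${RESP_DB}?prefix=${RESP_PREFIX}"
echo "$uri"   # resp://localhost:6379/0?prefix=zos/blobs
```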

View File

@@ -4,7 +4,9 @@
# Git repositories to clone and build
git zinit https://github.com/threefoldtech/zinit master build_zinit
git mycelium https://github.com/threefoldtech/mycelium v0.6.1 build_mycelium
git zosstorage https://git.ourworld.tf/delandtj/zosstorage main build_zosstorage
git youki git@github.com:youki-dev/youki.git v0.5.7 build_youki
git rfs https://github.com/threefoldtech/rfs development build_rfs
# Pre-built releases to download
release rfs https://github.com/threefoldtech/rfs/releases/download/v2.0.6/rfs v2.0.6 install_rfs
# release rfs https://github.com/threefoldtech/rfs/releases/download/v2.0.6/rfs v2.0.6 install_rfs
release corex https://github.com/threefoldtech/corex/releases/download/2.1.4/corex-2.1.4-amd64-linux-static 2.1.4 install_corex rename=corex

View File

@@ -1,2 +0,0 @@
exec: sh /etc/zinit/init/cgroup.sh
oneshot: true

View File

@@ -1 +0,0 @@
exec: depmod -a

View File

@@ -1,2 +0,0 @@
exec: /sbin/getty -L 115200 ttyS0 vt100
restart: always

View File

@@ -1,2 +0,0 @@
exec: /sbin/getty console linux
restart: always

View File

@@ -1,2 +0,0 @@
exec: haveged -w 1024 -d 32 -i 32 -v 1
oneshot: true

View File

@@ -1,6 +0,0 @@
#!/bin/bash
echo "start ash terminal"
while true; do
getty -l /bin/ash -n 19200 tty2
done

View File

@@ -1,10 +0,0 @@
set -x
mount -t tmpfs cgroup_root /sys/fs/cgroup
subsys="pids cpuset cpu cpuacct blkio memory devices freezer net_cls perf_event net_prio hugetlb"
for sys in $subsys; do
mkdir -p /sys/fs/cgroup/$sys
mount -t cgroup $sys -o $sys /sys/fs/cgroup/$sys/
done

View File

@@ -1,10 +0,0 @@
#!/bin/bash
modprobe fuse
modprobe btrfs
modprobe tun
modprobe br_netfilter
echo never > /sys/kernel/mm/transparent_hugepage/enabled
ulimit -n 524288

View File

@@ -1,10 +0,0 @@
#!/bin/sh
ntp_flags=$(grep -o 'ntp=.*' /proc/cmdline | sed 's/^ntp=//')
params=""
if [ -n "$ntp_flags" ]; then
params=$(echo "-p $ntp_flags" | sed s/,/' -p '/g)
fi
exec ntpd -n $params

View File

@@ -1,4 +0,0 @@
#!/bin/bash
echo "Enable ip forwarding"
echo 1 > /proc/sys/net/ipv4/ip_forward

View File

@@ -1,3 +0,0 @@
#!/bin/sh
mkdir /dev/shm
mount -t tmpfs shm /dev/shm

View File

@@ -1,15 +0,0 @@
#!/bin/ash
if [ -f /etc/ssh/ssh_host_rsa_key ]; then
# ensure existing file permissions
chown root:root /etc/ssh/ssh_host_*
chmod 600 /etc/ssh/ssh_host_*
exit 0
fi
echo "Setting up sshd"
mkdir -p /run/sshd
ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -t rsa
ssh-keygen -f /etc/ssh/ssh_host_dsa_key -N '' -t dsa
ssh-keygen -f /etc/ssh/ssh_host_ecdsa_key -N '' -t ecdsa -b 521
ssh-keygen -f /etc/ssh/ssh_host_ed25519_key -N '' -t ed25519

View File

@@ -1,4 +0,0 @@
#!/bin/sh
udevadm trigger --action=add
udevadm settle

View File

@@ -1,2 +0,0 @@
exec: ip l set lo up
oneshot: true

View File

@@ -1,2 +0,0 @@
exec: sh /etc/zinit/init/modprobe.sh
oneshot: true

View File

@@ -1,6 +0,0 @@
exec: /usr/bin/mycelium --key-file /tmp/mycelium_priv_key.bin
--tun-name my0 --silent --peers tcp://188.40.132.242:9651 tcp://136.243.47.186:9651
tcp://185.69.166.7:9651 tcp://185.69.166.8:9651 tcp://65.21.231.58:9651 tcp://65.109.18.113:9651
tcp://209.159.146.190:9651 tcp://5.78.122.16:9651 tcp://5.223.43.251:9651 tcp://142.93.217.194:9651
after:
- network

View File

@@ -1,5 +0,0 @@
exec: dhcpcd eth0
after:
- depmod
- udevd
- udev-trigger

View File

@@ -1,3 +0,0 @@
exec: sh /etc/zinit/init/ntpd.sh
after:
- network

View File

@@ -1,2 +0,0 @@
exec: sh /etc/zinit/init/routing.sh
oneshot: true

View File

@@ -1,2 +0,0 @@
exec: /etc/zinit/init/shm.sh
oneshot: true

View File

@@ -1,2 +0,0 @@
exec: sh /etc/zinit/init/sshd-setup.sh
oneshot: true

View File

@@ -1,3 +0,0 @@
exec: /usr/sbin/sshd -D -e
after:
- sshd-setup

View File

@@ -1,6 +0,0 @@
exec: sh /etc/zinit/init/udev.sh
oneshot: true
after:
- depmod
- udevmon
- udevd

View File

@@ -1 +0,0 @@
exec: udevd

View File

@@ -1 +0,0 @@
exec: udevadm monitor

View File

@@ -1,33 +0,0 @@
# Main zinit configuration for Zero OS Alpine
# This replaces OpenRC completely
# Logging configuration
log_level: debug
log_file: /var/log/zinit/zinit.log
# Initialization phases
init:
# Phase 1: Critical system setup
- stage1-modules
- udevd
- depmod
# Phase 2: Extended hardware and networking
- stage2-modules
- network
- lo
# Phase 3: System services
- routing
- ntp
- haveged
# Phase 4: User services
- sshd-setup
- sshd
- getty
- console
- gettyconsole
# Service dependencies and ordering managed by individual service files
# All services are defined in the services/ subdirectory

View File

@@ -1,2 +0,0 @@
exec: /sbin/getty -L 9600 console
restart: always

View File

@@ -1 +1,2 @@
exec: depmod -a
oneshot: true

View File

@@ -0,0 +1,2 @@
exec: /sbin/agetty --noclear -a root console linux
restart: always

View File

@@ -0,0 +1,2 @@
exec: /sbin/agetty --noclear -a root tty2 vt100
restart: always

View File

@@ -1,2 +0,0 @@
exec: /sbin/getty -L 115200 ttyS0 vt100
restart: always

View File

@@ -1,2 +0,0 @@
exec: /sbin/getty console
restart: always

View File

@@ -1,2 +1,5 @@
exec: haveged -w 1024 -d 32 -i 32 -v 1
oneshot: true
after:
- shm

View File

@@ -0,0 +1,101 @@
#!/bin/sh
# rfs mount firmware flist over /lib/firmware (plain S3 route inside the .fl)
# Looks for firmware-latest.fl in known locations; can be overridden via FIRMWARE_FLIST env
set -eu
log() { echo "[rfs-firmware] $*"; }
RFS_BIN="${RFS_BIN:-rfs}"
TARGET="/lib/firmware"
# Accept both FLISTS_BASE_URL and FLIST_BASE_URL (alias); prefer FLISTS_BASE_URL
if [ -n "${FLISTS_BASE_URL:-}" ]; then
BASE_URL="${FLISTS_BASE_URL}"
elif [ -n "${FLIST_BASE_URL:-}" ]; then
BASE_URL="${FLIST_BASE_URL}"
else
BASE_URL="https://zos.grid.tf/store/flists"
fi
# HTTP readiness helper: wait until BASE_URL responds to HTTP(S)
wait_for_http() {
url="$1"
tries="${2:-60}"
delay="${3:-2}"
i=0
while [ "$i" -lt "$tries" ]; do
if command -v wget >/dev/null 2>&1; then
if wget -q --spider "$url"; then
return 0
fi
elif command -v busybox >/dev/null 2>&1; then
if busybox wget -q --spider "$url"; then
return 0
fi
fi
i=$((i+1))
log "waiting for $url (attempt $i/$tries)"
sleep "$delay"
done
return 1
}
# Allow override via env
if [ -n "${FIRMWARE_FLIST:-}" ] && [ -f "${FIRMWARE_FLIST}" ]; then
FL="${FIRMWARE_FLIST}"
else
# Candidate paths for the flist manifest
for p in \
/etc/rfs/firmware-latest.fl \
/var/lib/rfs/firmware-latest.fl \
/root/firmware-latest.fl \
/firmware-latest.fl \
; do
if [ -f "$p" ]; then
FL="$p"
break
fi
done
fi
if [ -z "${FL:-}" ]; then
# Try remote fetch as a fallback (but first ensure BASE_URL is reachable)
mkdir -p /etc/rfs
FL="/etc/rfs/firmware-latest.fl"
URL="${BASE_URL%/}/firmware-latest.fl"
log "firmware-latest.fl not found locally; will fetch ${URL}"
# Probe BASE_URL root so DNS/HTTP are ready before fetch
ROOT_URL="${BASE_URL%/}/"
if ! wait_for_http "$ROOT_URL" 60 2; then
log "BASE_URL not reachable yet: ${ROOT_URL}; continuing to attempt fetch anyway"
fi
log "fetching ${URL}"
if command -v wget >/dev/null 2>&1; then
wget -q -O "${FL}" "${URL}" || true
elif command -v busybox >/dev/null 2>&1; then
busybox wget -q -O "${FL}" "${URL}" || true
else
log "no wget available to fetch ${URL}"
fi
if [ ! -s "${FL}" ]; then
log "failed to fetch ${URL}; skipping mount"
exit 0
fi
fi
# Ensure target directory exists
mkdir -p "$TARGET"
# Skip if already mounted
if mountpoint -q "$TARGET" 2>/dev/null; then
log "already mounted: $TARGET"
exit 0
fi
# Perform the mount
log "mounting ${FL} -> ${TARGET}"
exec "$RFS_BIN" mount -m "$FL" "$TARGET"

View File

@@ -3,7 +3,17 @@
modprobe fuse
modprobe btrfs
modprobe tun
modprobe br_netfilter
modprobe usbhid
modprobe hid_generic
modprobe hid
modprobe atkbd
modprobe libps2
modprobe i2c_smbus
modprobe serio
modprobe i8042
echo never > /sys/kernel/mm/transparent_hugepage/enabled

View File

@@ -0,0 +1,101 @@
#!/bin/sh
# rfs mount modules flist over /lib/modules/$(uname -r) (plain S3 route embedded in the .fl)
# Looks for modules-$(uname -r).fl in known locations; can be overridden via MODULES_FLIST env.
set -eu
log() { echo "[rfs-modules] $*"; }
RFS_BIN="${RFS_BIN:-rfs}"
KVER="$(uname -r)"
TARGET="/lib/modules/${KVER}"
# Accept both FLISTS_BASE_URL and FLIST_BASE_URL (alias); prefer FLISTS_BASE_URL
if [ -n "${FLISTS_BASE_URL:-}" ]; then
BASE_URL="${FLISTS_BASE_URL}"
elif [ -n "${FLIST_BASE_URL:-}" ]; then
BASE_URL="${FLIST_BASE_URL}"
else
BASE_URL="https://zos.grid.tf/store/flists"
fi
# HTTP readiness helper: wait until BASE_URL responds to HTTP(S)
wait_for_http() {
url="$1"
tries="${2:-60}"
delay="${3:-2}"
i=0
while [ "$i" -lt "$tries" ]; do
if command -v wget >/dev/null 2>&1; then
if wget -q --spider "$url"; then
return 0
fi
elif command -v busybox >/dev/null 2>&1; then
if busybox wget -q --spider "$url"; then
return 0
fi
fi
i=$((i+1))
log "waiting for $url (attempt $i/$tries)"
sleep "$delay"
done
return 1
}
# Allow override via env
if [ -n "${MODULES_FLIST:-}" ] && [ -f "${MODULES_FLIST}" ]; then
FL="${MODULES_FLIST}"
else
# Candidate paths for the flist manifest
for p in \
"/etc/rfs/modules-${KVER}.fl" \
"/var/lib/rfs/modules-${KVER}.fl" \
"/root/modules-${KVER}.fl" \
"/modules-${KVER}.fl" \
; do
if [ -f "$p" ]; then
FL="$p"
break
fi
done
fi
if [ -z "${FL:-}" ]; then
# Try remote fetch as a fallback (modules-<uname -r>-Zero-OS.fl)
mkdir -p /etc/rfs
FL="/etc/rfs/modules-${KVER}.fl"
URL="${BASE_URL%/}/modules-${KVER}-Zero-OS.fl"
log "modules-${KVER}.fl not found locally; will fetch ${URL}"
# Probe BASE_URL root so DNS/HTTP are ready before fetch
ROOT_URL="${BASE_URL%/}/"
if ! wait_for_http "$ROOT_URL" 60 2; then
log "BASE_URL not reachable yet: ${ROOT_URL}; continuing to attempt fetch anyway"
fi
log "fetching ${URL}"
if command -v wget >/dev/null 2>&1; then
wget -q -O "${FL}" "${URL}" || true
elif command -v busybox >/dev/null 2>&1; then
busybox wget -q -O "${FL}" "${URL}" || true
else
log "no wget available to fetch ${URL}"
fi
if [ ! -s "${FL}" ]; then
log "failed to fetch ${URL}; skipping mount"
exit 0
fi
fi
# Ensure target directory exists
mkdir -p "$TARGET"
# Skip if already mounted
if mountpoint -q "$TARGET" 2>/dev/null; then
log "already mounted: $TARGET"
exit 0
fi
# Perform the mount
log "mounting ${FL} -> ${TARGET}"
exec "$RFS_BIN" mount -m "$FL" "$TARGET"

config/zinit/init/network.sh Executable file
View File

@@ -0,0 +1,8 @@
#!/bin/sh
set -e
# Ensure dhcpcd user/group exist (some builds expect to drop privileges)
if ! getent group dhcpcd >/dev/null 2>&1; then addgroup -S dhcpcd 2>/dev/null || true; fi
if ! getent passwd dhcpcd >/dev/null 2>&1; then adduser -S -D -s /sbin/nologin -G dhcpcd dhcpcd 2>/dev/null || true; fi
# Exec dhcpcd (will run as root if it cannot drop to dhcpcd user)
interfaces=$(ip -br l | awk '!/lo/&&!/my0/{print $1}')
exec dhcpcd -p -B $interfaces

View File

@@ -1,10 +1,24 @@
#!/bin/sh
set -eu
ntp_flags=$(grep -o 'ntp=.*' /proc/cmdline | sed 's/^ntp=//')
# Ensure /etc/ntp.conf exists for tools/hooks expecting it
if [ ! -e /etc/ntp.conf ] && [ -f /etc/ntpd.conf ]; then
ln -sf /etc/ntpd.conf /etc/ntp.conf
fi
# dhcpcd hook may write into /var/lib/ntp
mkdir -p /var/lib/ntp
# Extract ntp=... from kernel cmdline if present
ntp_flags="$(grep -o 'ntp=[^ ]*' /proc/cmdline 2>/dev/null | sed 's/^ntp=//' || true)"
params=""
if [ -n "$ntp_flags" ]; then
params=$(echo "-p $ntp_flags" | sed s/,/' -p '/g)
if [ -n "${ntp_flags}" ]; then
# Convert comma-separated list into multiple -p args
params="-p $(printf '%s' "${ntp_flags}" | sed 's/,/ -p /g')"
else
# Sensible defaults when no ntp= is provided
params="-p time.google.com -p time1.google.com -p time2.google.com -p time3.google.com"
fi
exec ntpd -n $params
# BusyBox ntpd uses -p servers on CLI; /etc/ntp.conf symlink above helps alternative daemons.
exec ntpd -n ${params}
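The comma-to-`-p` conversion in the rewritten ntpd.sh can be exercised standalone:

```shell
# reproduce the ntp= flag handling: a comma-separated server list from the
# kernel cmdline becomes repeated -p arguments for BusyBox ntpd
ntp_flags="pool.ntp.org,time.google.com"
params="-p $(printf '%s' "${ntp_flags}" | sed 's/,/ -p /g')"
echo "$params"   # -p pool.ntp.org -p time.google.com
```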

View File

@@ -0,0 +1,32 @@
#!/bin/bash
# Simple script to create OVS database and start ovsdb-server
# Configuration
DATABASE=${DATABASE:-"/etc/openvswitch/conf.db"}
DBSCHEMA="/usr/share/openvswitch/vswitch.ovsschema"
DB_SOCKET=${DB_SOCKET:-"/var/run/openvswitch/db.sock"}
RUNDIR="/var/run/openvswitch"
# Create run directory
mkdir -p "$RUNDIR"
# Create database if it doesn't exist
if [ ! -e "$DATABASE" ]; then
echo "Creating database: $DATABASE"
ovsdb-tool create "$DATABASE" "$DBSCHEMA"
fi
# Check if database needs conversion
if [ "$(ovsdb-tool needs-conversion "$DATABASE" "$DBSCHEMA")" = "yes" ]; then
echo "Converting database: $DATABASE"
ovsdb-tool convert "$DATABASE" "$DBSCHEMA"
fi
# Start ovsdb-server
echo "Starting ovsdb-server..."
exec ovsdb-server \
--remote=punix:"$DB_SOCKET" \
--remote=db:Open_vSwitch,Open_vSwitch,manager_options \
--pidfile \
--detach \
"$DATABASE"

View File

@@ -1,6 +1,8 @@
exec: /usr/bin/mycelium --key-file /tmp/mycelium_priv_key.bin
exec: /usr/bin/mycelium --key-file /var/cache/etc/mycelium_priv_key.bin
--tun-name my0 --silent --peers tcp://188.40.132.242:9651 tcp://136.243.47.186:9651
tcp://185.69.166.7:9651 tcp://185.69.166.8:9651 tcp://65.21.231.58:9651 tcp://65.109.18.113:9651
tcp://209.159.146.190:9651 tcp://5.78.122.16:9651 tcp://5.223.43.251:9651 tcp://142.93.217.194:9651
after:
- network
- udev-rfs
- zosstorage

View File

@@ -1,5 +1,6 @@
exec: dhcpcd eth0
exec: sh /etc/zinit/init/network.sh
after:
- depmod
- udevd
- udev-trigger
test: ping -c1 www.google.com

View File

@@ -0,0 +1,4 @@
exec: /etc/zinit/init/ovsdb-server.sh
oneshot: true
after:
- zosstorage

View File

@@ -0,0 +1,4 @@
exec: sh /etc/zinit/init/firmware.sh
restart: always
after:
- network

View File

@@ -0,0 +1,4 @@
exec: sh /etc/zinit/init/modules.sh
restart: always
after:
- rfs-firmware

View File

@@ -1,2 +1,4 @@
exec: sh /etc/zinit/init/sshd-setup.sh
oneshot: true
after:
- network

View File

@@ -1,3 +1,4 @@
exec: /usr/sbin/sshd -D -e
after:
- sshd-setup
- haveged

View File

@@ -0,0 +1,4 @@
exec: sh -c "sleep 3 ; /etc/zinit/init/udev.sh"
oneshot: true
after:
- rfs-modules

View File

@@ -0,0 +1,4 @@
exec: /usr/bin/zosstorage -l debug
oneshot: true
after:
- rfs-modules

View File

@@ -1,106 +0,0 @@
#!/bin/sh -x
# Alpine-based Zero-OS Init Script
# Maintains identical flow to original busybox version
echo ""
echo "============================================"
echo "== ZERO-OS ALPINE INITRAMFS =="
echo "============================================"
echo "[+] creating ram filesystem"
target="/mnt/root"
mkdir -p $target
mount -t proc proc /proc
mount -t sysfs sysfs /sys
mount -t tmpfs tmpfs /mnt/root -o size=1024M
mount -t devtmpfs devtmpfs /dev
echo "[+] building ram filesystem"
# Copy Alpine filesystem to tmpfs (same as original)
echo " copying /bin..."
cp -ar /bin $target
echo " copying /etc..."
cp -ar /etc $target
echo " copying /lib..."
cp -ar /lib* $target
echo " copying /usr..."
cp -ar /usr $target
echo " copying /root..."
cp -ar /root $target
echo " copying /sbin..."
cp -ar /sbin $target
echo " copying /tmp..."
cp -ar /tmp $target
echo " copying /var..."
cp -ar /var $target
echo " copying /run..."
cp -ar /run $target
# Create essential directories
mkdir -p $target/dev
mkdir -p $target/sys
mkdir -p $target/proc
mkdir -p $target/mnt
# Mount filesystems in tmpfs
mount -t proc proc $target/proc
mount -t sysfs sysfs $target/sys
mount -t devtmpfs devtmpfs $target/dev
# Mount devpts for terminals
mkdir -p $target/dev/pts
mount -t devpts devpts $target/dev/pts
echo "[+] setting environment"
export PATH
echo "[+] probing drivers"
# Use Alpine's udev instead of busybox udevadm
if [ -x /sbin/udevd ]; then
echo " starting udevd..."
udevd --daemon
echo " triggering device discovery..."
udevadm trigger --action=add --type=subsystems
udevadm trigger --action=add --type=devices
udevadm settle
echo " stopping udevd..."
kill $(pidof udevd) || true
else
echo " warning: udevd not found, skipping hardware detection"
fi
echo "[+] loading essential drivers"
# Load core drivers for storage and network
modprobe btrfs 2>/dev/null || true
modprobe fuse 2>/dev/null || true
modprobe overlay 2>/dev/null || true
# Load storage drivers
modprobe ahci 2>/dev/null || true
modprobe nvme 2>/dev/null || true
modprobe virtio_blk 2>/dev/null || true
modprobe virtio_scsi 2>/dev/null || true
# Load network drivers
modprobe virtio_net 2>/dev/null || true
modprobe e1000 2>/dev/null || true
modprobe e1000e 2>/dev/null || true
# Unmount init filesystems
umount /proc 2>/dev/null || true
umount /sys 2>/dev/null || true
echo "[+] checking for debug files"
if [ -e /init-debug ]; then
echo " executing debug script..."
sh /init-debug
fi
echo "[+] switching root"
echo " exec switch_root /mnt/root /sbin/zinit init"
exec switch_root /mnt/root /sbin/zinit -d init
##

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@@ -1,59 +0,0 @@
# Essential kernel modules for Zero-OS Alpine initramfs
# This file contains a curated list of essential modules for network and storage functionality
# Comments are supported (lines starting with #)
# Network drivers - Intel
e1000
e1000e
igb
ixgbe
i40e
ice
# Network drivers - Realtek
r8169
8139too
8139cp
# Network drivers - Broadcom
bnx2
bnx2x
tg3
b44
# Network drivers - Atheros
atl1
atl1e
atl1c
alx
# VirtIO drivers
virtio_net
virtio_scsi
virtio_blk
virtio_pci
# Tunnel and container support
tun
overlay
# Storage subsystem (essential only)
scsi_mod
sd_mod
# Control Groups (cgroups v1 and v2) - essential for container management
cgroup_pids
cgroup_freezer
cgroup_perf_event
cgroup_device
cgroup_cpuset
cgroup_bpf
cgroup_debug
memcg
blkio_cgroup
cpu_cgroup
cpuacct
hugetlb_cgroup
net_cls_cgroup
net_prio_cgroup
devices_cgroup

View File

@@ -1,46 +0,0 @@
# MINIMAL Alpine packages for Zero-OS embedded initramfs
# Target: ~50MB total (not 700MB!)
# Core system (essential only)
alpine-baselayout
busybox
musl
# Module loading & hardware detection
eudev
eudev-hwids
eudev-libs
eudev-netifnames
kmod
# Console/terminal management
util-linux
# Essential networking (for Zero-OS connectivity)
iproute2
ethtool
# Filesystem support (minimal)
btrfs-progs
dosfstools
# Essential libraries only
zlib
# Network utilities (minimal)
dhcpcd
tcpdump
bmon
# Random number generation (for crypto/security)
haveged
# SSH access and terminal multiplexer
openssh-server
zellij
# Essential debugging and monitoring tools included
# NO development tools, NO curl/wget, NO python, NO redis
# NO massive linux-firmware package
# Other tools will be loaded from RFS after network connectivity

View File

@@ -1,10 +0,0 @@
# sources.conf - Components to download and build for initramfs
# Format: TYPE:NAME:URL:VERSION:BUILD_FUNCTION[:EXTRA_OPTIONS]
# Git repositories to clone and build
git:zinit:https://github.com/threefoldtech/zinit:master:build_zinit
git:mycelium:https://github.com/threefoldtech/mycelium:0.6.1:build_mycelium
git:rfs:https://github.com/threefoldtech/rfs:development:build_rfs
# Pre-built releases to download
release:corex:https://github.com/threefoldtech/corex/releases/download/2.1.4/corex-2.1.4-amd64-linux-static:2.1.4:install_corex:rename=corex

View File

@@ -1,2 +0,0 @@
exec: sh /etc/zinit/init/cgroup.sh
oneshot: true

View File

@@ -1,2 +0,0 @@
exec: /sbin/getty -L 9600 console
restart: always

View File

@@ -1 +0,0 @@
exec: depmod -a

View File

@@ -1,2 +0,0 @@
exec: /sbin/getty -L 115200 ttyS0 vt100
restart: always

View File

@@ -1,2 +0,0 @@
exec: /sbin/getty console linux
restart: always

View File

@@ -1,2 +0,0 @@
exec: haveged -w 1024 -d 32 -i 32 -v 1
oneshot: true

View File

@@ -1,6 +0,0 @@
#!/bin/bash
echo "start ash terminal"
while true; do
getty -l /bin/ash -n 19200 tty2
done

View File

@@ -1,10 +0,0 @@
set -x
mount -t tmpfs cgroup_root /sys/fs/cgroup
subsys="pids cpuset cpu cpuacct blkio memory devices freezer net_cls perf_event net_prio hugetlb"
for sys in $subsys; do
mkdir -p /sys/fs/cgroup/$sys
mount -t cgroup $sys -o $sys /sys/fs/cgroup/$sys/
done

View File

@@ -1,10 +0,0 @@
#!/bin/bash
modprobe fuse
modprobe btrfs
modprobe tun
modprobe br_netfilter
echo never > /sys/kernel/mm/transparent_hugepage/enabled
ulimit -n 524288

View File

@@ -1,10 +0,0 @@
#!/bin/sh
ntp_flags=$(grep -o 'ntp=.*' /proc/cmdline | sed 's/^ntp=//')
params=""
if [ -n "$ntp_flags" ]; then
params=$(echo "-p $ntp_flags" | sed s/,/' -p '/g)
fi
exec ntpd -n $params

View File

@@ -1,4 +0,0 @@
#!/bin/bash
echo "Enable ip forwarding"
echo 1 > /proc/sys/net/ipv4/ip_forward

View File

@@ -1,3 +0,0 @@
#!/bin/sh
mkdir /dev/shm
mount -t tmpfs shm /dev/shm

View File

@@ -1,15 +0,0 @@
#!/bin/ash
if [ -f /etc/ssh/ssh_host_rsa_key ]; then
# ensure existing file permissions
chown root:root /etc/ssh/ssh_host_*
chmod 600 /etc/ssh/ssh_host_*
exit 0
fi
echo "Setting up sshd"
mkdir -p /run/sshd
ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -t rsa
ssh-keygen -f /etc/ssh/ssh_host_dsa_key -N '' -t dsa
ssh-keygen -f /etc/ssh/ssh_host_ecdsa_key -N '' -t ecdsa -b 521
ssh-keygen -f /etc/ssh/ssh_host_ed25519_key -N '' -t ed25519

View File

@@ -1,4 +0,0 @@
#!/bin/sh
udevadm trigger --action=add
udevadm settle

View File

@@ -1,2 +0,0 @@
exec: ip l set lo up
oneshot: true

View File

@@ -1,2 +0,0 @@
exec: sh /etc/zinit/init/modprobe.sh
oneshot: true

View File

@@ -1,6 +0,0 @@
exec: /usr/bin/mycelium --key-file /tmp/mycelium_priv_key.bin
--tun-name my0 --silent --peers tcp://188.40.132.242:9651 tcp://136.243.47.186:9651
tcp://185.69.166.7:9651 tcp://185.69.166.8:9651 tcp://65.21.231.58:9651 tcp://65.109.18.113:9651
tcp://209.159.146.190:9651 tcp://5.78.122.16:9651 tcp://5.223.43.251:9651 tcp://142.93.217.194:9651
after:
- network

View File

@@ -1,5 +0,0 @@
exec: dhcpcd eth0
after:
- depmod
- udevd
- udev-trigger

View File

@@ -1,3 +0,0 @@
exec: sh /etc/zinit/init/ntpd.sh
after:
- network

View File

@@ -1,2 +0,0 @@
exec: sh /etc/zinit/init/routing.sh
oneshot: true

View File

@@ -1,2 +0,0 @@
exec: /etc/zinit/init/shm.sh
oneshot: true

View File

@@ -1,2 +0,0 @@
exec: sh /etc/zinit/init/sshd-setup.sh
oneshot: true

View File

@@ -1,3 +0,0 @@
exec: /usr/sbin/sshd -D -e
after:
- sshd-setup

View File

@@ -1,6 +0,0 @@
exec: sh /etc/zinit/init/udev.sh
oneshot: true
after:
- depmod
- udevmon
- udevd

View File

@@ -1 +0,0 @@
exec: udevd

View File

@@ -1 +0,0 @@
exec: udevadm monitor

docs/NOTES.md Normal file
View File

@@ -0,0 +1,286 @@
Zero-OS Builder Working Notes and Repository Map
Purpose
- This document captures operational knowledge of this repository: build flow, key files, flags, and recent behavior decisions (e.g., passwordless root).
- Links below point to exact functions and files for fast triage, using code-navigation-friendly anchors.
Repository Overview
- Build entrypoint: [scripts/build.sh](scripts/build.sh)
- Orchestrates incremental stages using stage markers.
- Runs inside a container defined by [Dockerfile](Dockerfile) for reproducibility.
- Common utilities and config loading: [bash.common.sh](scripts/lib/common.sh:1)
- Loads [config/build.conf](config/build.conf), normalizes directory paths, provides logging and safe execution wrappers.
- Initramfs assembly and finalization: [bash.initramfs_* functions](scripts/lib/initramfs.sh:1)
- Copies components, sets up zinit configs, finalizes branding, creates CPIO archive, validates contents.
- Kernel integration (optional embedded initramfs): [bash.kernel_* functions](scripts/lib/kernel.sh:1)
- Downloads/configures/builds kernel and modules, embeds initramfs, runs depmod.
- zinit configuration: [config/zinit/](config/zinit/)
- YAML service definitions and init scripts used by zinit inside the initramfs rootfs.
- RFS tooling (modules/firmware flists): [scripts/rfs/](scripts/rfs/)
- Packs module/firmware flists and embeds them into initramfs at /etc/rfs.
Container Tooling (dev-container)
- Base image: Alpine 3.22 in [Dockerfile](Dockerfile:1)
- Tools:
- shadow (passwd/chpasswd): required for root password management in initramfs.
- openssl, openssl-dev: kept for other build steps and potential hashing utilities.
- build-base, rustup, kmod, upx, etc.: required by various build stages.
- Removed: perl, not required for password handling after switching to passwd/chpasswd workflow.
Configuration build.conf
- File: [config/build.conf](config/build.conf)
- Key variables:
- Versions: ALPINE_VERSION, KERNEL_VERSION
- Directories (relative in config, normalized to absolute during runtime):
- INSTALL_DIR="initramfs"
- COMPONENTS_DIR="components"
- KERNEL_DIR="kernel"
- DIST_DIR="dist"
- Flags:
- ZEROOS_BRANDING="true"
- ZEROOS_REBRANDING="true"
- Branding behavior:
- ZEROOS_PASSWORDLESS_ROOT="true" (default for branded builds in current policy)
- ZEROOS_ROOT_PASSWORD_HASH / ROOT_PASSWORD_HASH (not used in current policy)
- ZEROOS_ROOT_PASSWORD / ROOT_PASSWORD (not used in current policy)
- FIRMWARE_TAG optional for reproducible firmware flist naming.
Absolute Path Normalization
- Location: [bash.common.sh](scripts/lib/common.sh:236)
- After sourcing build.conf, the following variables are normalized to absolute paths anchored at PROJECT_ROOT:
- INSTALL_DIR, COMPONENTS_DIR, KERNEL_DIR, DIST_DIR
- Rationale: Prevents path resolution errors when CWD changes (e.g., when kernel build operates in /workspace/kernel/current, validation now resolves to /workspace/initramfs instead of /workspace/kernel/current/initramfs).
Build Pipeline High Level
- Orchestrator: [bash.main_build_process()](scripts/build.sh:214)
- Stage list:
- alpine_extract
- alpine_configure
- alpine_packages
- alpine_firmware
- components_build
- components_verify
- kernel_modules
- init_script
- components_copy
- zinit_setup
- modules_setup
- modules_copy
- cleanup
- rfs_flists
- validation
- initramfs_create
- initramfs_test
- kernel_build
- boot_tests
- Each stage wrapped with [bash.stage_run()](scripts/lib/stages.sh:99) and tracked under .build-stages/
- Container use:
- Always run in container for stable toolchain (podman/docker auto-detected).
- Inside container, CWD normalized to PROJECT_ROOT.
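The marker-driven stage system described above can be sketched as follows (hypothetical simplification; the real implementation lives in scripts/lib/stages.sh):

```shell
# minimal marker-based stage runner: run a stage once, skip when its
# .done marker already exists under the stage directory
STAGE_DIR="$(mktemp -d)/.build-stages"
mkdir -p "$STAGE_DIR"
stage_run() {
  name="$1"; shift
  marker="$STAGE_DIR/${name}.done"
  if [ -f "$marker" ]; then
    echo "skipping $name (marker present)"
    return 0
  fi
  "$@" && touch "$marker" && echo "completed $name"
}
first="$(stage_run demo true)"
second="$(stage_run demo true)"
echo "$first"
echo "$second"
```

Removing a stage's `.done` file (as rebuild-after-zinit.sh does) is what forces that stage to rerun on the next build.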
Initramfs Assembly Key Functions
- zinit setup: [bash.initramfs_setup_zinit()](scripts/lib/initramfs.sh:12)
- Copies [config/zinit](config/zinit/) into /etc/zinit, fixes perms, removes OpenRC remnants.
- Install init script: [bash.initramfs_install_init_script()](scripts/lib/initramfs.sh:74)
- Installs [config/init](config/init) as /init in initramfs root.
- Components copy: [bash.initramfs_copy_components()](scripts/lib/initramfs.sh:101)
- Installs built components (zinit/rfs/mycelium/corex/zosstorage) into proper locations, strips/UPX where applicable.
- Modules setup: [bash.initramfs_setup_modules()](scripts/lib/initramfs.sh:229)
- Reads [config/modules.conf](config/modules.conf), resolves deps via [bash.initramfs_resolve_module_dependencies()](scripts/lib/initramfs.sh:318), generates stage1 list (firmware hints in modules.conf are ignored; firmware.conf is authoritative).
- Create module scripts: [bash.initramfs_create_module_scripts()](scripts/lib/initramfs.sh:427)
- Writes /etc/zinit/init/stage1-modules.sh and stage2-modules.sh for zinit to load modules.
- Binary size optimization: [bash.initramfs_strip_and_upx()](scripts/lib/initramfs.sh:491)
- Final customization: [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:575)
- See “Branding behavior” below.
- Create archive: [bash.initramfs_create_cpio()](scripts/lib/initramfs.sh:688)
- Calls finalize, runs sanity checks, and creates initramfs.cpio.xz.
- Validate: [bash.initramfs_validate()](scripts/lib/initramfs.sh:799)
- Ensures essential items exist, logs debug info:
- Prints “Validation debug:” lines showing input, PWD, PROJECT_ROOT, INSTALL_DIR, and resolved path.
Kernel Integration
- Get full kernel version: [bash.kernel_get_full_version()](scripts/lib/kernel.sh:13)
- Apply config (embed initramfs): [bash.kernel_apply_config()](scripts/lib/kernel.sh:81)
- Updates CONFIG_INITRAMFS_SOURCE to the archive's absolute path via [bash.kernel_modify_config_for_initramfs()](scripts/lib/kernel.sh:130).
- Build kernel: [bash.kernel_build_with_initramfs()](scripts/lib/kernel.sh:173)
- Build and install modules in container: [bash.kernel_build_modules()](scripts/lib/kernel.sh:228)
- Installs modules to /lib/modules/$FULL_VERSION in container, runs depmod -a.
RFS Flists (modules/firmware)
- Packing scripts:
- Modules: [bash.pack-modules.sh](scripts/rfs/pack-modules.sh:1)
- Firmware: [bash.pack-firmware.sh](scripts/rfs/pack-firmware.sh:1)
- Firmware policy:
- For initramfs: [config/firmware.conf](config/firmware.conf) is the single source of truth for preinstalled firmware; modules.conf hints are ignored.
- For RFS: install all Alpine linux-firmware* packages into the build container and pack from /lib/firmware (full set for runtime).
- Integrated in stage_rfs_flists:
- Embeds /etc/rfs/modules-<FULL_KERNEL_VERSION>.fl
- Embeds /etc/rfs/firmware-latest.fl (or tagged by FIRMWARE_TAG)
- See [bash.main_build_process() — stage_rfs_flists](scripts/build.sh:298)
- Runtime mount/readiness:
- Firmware flist mounts over /lib/firmware (overmount hides any initramfs firmware).
- Modules flist mounts at /lib/modules/$(uname -r).
- Init scripts probe BASE_URL reachability (accepts FLISTS_BASE_URL or FLIST_BASE_URL) and wait for HTTP(S) before fetching:
- Firmware: [sh.firmware.sh](config/zinit/init/firmware.sh:1)
- Modules: [sh.modules.sh](config/zinit/init/modules.sh:1)
Branding Behavior (Passwordless Root, motd/issue)
- Finalization hook: [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:575)
- Behavior (current policy):
- Passwordless root enforced using passwd for shadow-aware deletion:
- [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:616):
- passwd -d -R "${initramfs_dir}" root
- Ensures /etc/shadow has root:: (empty password) inside initramfs root, not host.
- Branding toggles: ZEROOS_BRANDING and ZEROOS_REBRANDING (branding guard printed in logs).
- Branding also updates /etc/motd and /etc/issue to Zero-OS.
Console and getty
- Early keyboard and debug:
- [config/init](config/init) preloads input/HID and USB HCD modules (i8042, atkbd, usbhid, hid, hid_generic, evdev, xhci/ehci/ohci/uhci) so console input works before zinit/rfs.
- Kernel cmdline initdebug=true opens an early interactive shell; if /init-debug exists and is executable, it runs preferentially.
- Serial and console getty configs (zinit service YAML):
- [config/zinit/getty-tty1.yaml](config/zinit/getty-tty1.yaml)
- [config/zinit/getty-console.yaml](config/zinit/getty-console.yaml)
- Optional ash login loop (not enabled unless referenced):
- [bash.ashloging.sh](config/zinit/init/ashloging.sh:1)
Validation Diagnostics and Triage
- Common error previously observed:
- “Initramfs directory not found: initramfs (resolved: /workspace/kernel/current/initramfs)”
- Root cause:
- INSTALL_DIR re-sourced in a different CWD and interpreted as relative.
- Fix:
- Absolute path normalization of INSTALL_DIR/COMPONENTS_DIR/KERNEL_DIR/DIST_DIR after sourcing build.conf in [bash.common.sh](scripts/lib/common.sh:236).
- Additional “Validation debug” prints added in [bash.initramfs_validate()](scripts/lib/initramfs.sh:799).
- Expected logs now:
- “Validation debug: input='initramfs' PWD=/workspace PROJECT_ROOT=/workspace INSTALL_DIR=/workspace/initramfs”
- Resolves correctly even if called from a different stage CWD.
How to Verify Passwordless Root
- After build, check archive:
- mkdir -p dist/_inspect && cd dist/_inspect
- xz -dc ../initramfs.cpio.xz | cpio -idmv
- grep '^root:' ./etc/shadow
- Expect root:: (empty field) indicating passwordless root.
- At runtime on console:
- When prompted for root's password, press Enter.
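The archive check above can be wrapped in a small helper; a minimal sketch (the function name is hypothetical, and the caller supplies the path to the already-extracted tree):

```shell
#!/bin/sh
# After extracting the archive (xz -dc ../initramfs.cpio.xz | cpio -idmv),
# check the root entry in etc/shadow for an empty password field ("root::").
shadow_root_is_passwordless() {
    # $1: path to the extracted initramfs root
    grep -q '^root::' "$1/etc/shadow"
}

# Usage: shadow_root_is_passwordless dist/_inspect && echo "passwordless OK"
```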
Stage System and Incremental Rebuilds
- Stage markers stored in .build-stages/ (one file per stage).
- Minimal rebuild helper (host or container):
- [scripts/rebuild-after-zinit.sh](scripts/rebuild-after-zinit.sh) clears only: modules_setup, modules_copy, init_script, zinit_setup, validation, initramfs_create, initramfs_test (kernel_build only with --with-kernel; kernel_modules only with --refresh-container-mods).
- Flags:
- --with-kernel (also rebuild kernel; ensures cpio is recreated right before embedding)
- --refresh-container-mods (rebuild container /lib/modules for fresh containers)
- --verify-only (report changed files and stage status; no rebuild)
- Shows stage status before/after marker removal; no --rebuild-from is passed by default (relies on markers only).
- Manual minimal rebuild:
- Remove relevant .done files, e.g.: initramfs_create.done initramfs_test.done validation.done
- Rerun: DEBUG=1 ./scripts/build.sh
- Show status:
- ./scripts/build.sh --show-stages
- Test built kernel:
- ./runit.sh --hypervisor qemu
- ./runit.sh --hypervisor ch --disks 5 --reset
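The manual marker removal above follows a fixed pattern; a minimal sketch (`clear_stages` is a hypothetical helper, not part of the repo; the `.build-stages/<stage>.done` layout is as described):

```shell
#!/bin/sh
# Remove .done markers so the next ./scripts/build.sh re-runs only those stages.
STAGES_DIR="${STAGES_DIR:-.build-stages}"

clear_stages() {
    for stage in "$@"; do
        rm -f "${STAGES_DIR}/${stage}.done"
    done
}

# Typical minimal rebuild set after zinit/init edits:
# clear_stages initramfs_create initramfs_test validation
```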
Key Decisions (current)
- Firmware selection for initramfs comes exclusively from [config/firmware.conf](config/firmware.conf); firmware hints in modules.conf are ignored to avoid duplication/mismatch.
- Runtime firmware flist overmounts /lib/firmware after network readiness; init scripts wait for FLISTS_BASE_URL/FLIST_BASE_URL HTTP reachability before fetching.
- Early keyboard and debug shell added to [config/init](config/init) as described above.
- Branding enforces passwordless root via passwd -d -R inside initramfs finalization, avoiding direct edits of passwd/shadow files.
- Directory paths normalized to absolute after loading config to avoid CWD-sensitive behavior.
- Container image contains shadow suite to ensure passwd/chpasswd availability; perl removed.
File Pointers (quick jump)
- Orchestrator: [scripts/build.sh](scripts/build.sh)
- Common and config loading: [bash.common.sh](scripts/lib/common.sh:1)
- Finalization hook: [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:575)
- Validation entry: [bash.initramfs_validate()](scripts/lib/initramfs.sh:799)
- CPIO creation: [bash.initramfs_create_cpio()](scripts/lib/initramfs.sh:688)
- Kernel embed config: [bash.kernel_modify_config_for_initramfs()](scripts/lib/kernel.sh:130)
- RFS packers: [bash.pack-modules.sh](scripts/rfs/pack-modules.sh:1), [bash.pack-firmware.sh](scripts/rfs/pack-firmware.sh:1)
- USB creator: [scripts/make-grub-usb.sh](scripts/make-grub-usb.sh)
Change Log
- 2025-09-09:
- Enforce passwordless root using passwd -d -R in finalization.
- Normalize INSTALL_DIR/COMPONENTS_DIR/KERNEL_DIR/DIST_DIR to absolute paths post-config load.
- Add validation diagnostics prints (input/PWD/PROJECT_ROOT/INSTALL_DIR/resolved).
- Ensure shadow package in container for passwd/chpasswd; keep openssl and openssl-dev; remove perl earlier.
- 2025-10-10:
- Record zosstorage as a standard component installed during [bash.initramfs_copy_components()](scripts/lib/initramfs.sh:101) and sync component documentation references.
Updates 2025-10-01
- Function index regenerated: see [scripts/functionlist.md](scripts/functionlist.md) for an authoritative map of all functions with current line numbers. Use it alongside the quick links below to jump into code fast.
- Key jump-points (current lines):
- Finalization: [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:568)
- CPIO creation: [bash.initramfs_create_cpio()](scripts/lib/initramfs.sh:691)
- Validation: [bash.initramfs_validate()](scripts/lib/initramfs.sh:820)
- Kernel embed config: [bash.kernel_modify_config_for_initramfs()](scripts/lib/kernel.sh:130)
- Stage orchestrator entry: [bash.main_build_process()](scripts/build.sh:214)
- Repo-wide index: [scripts/functionlist.md](scripts/functionlist.md)
Roadmap / TODO (tracked in tool todo list)
- Zosception (zinit service graph and ordering)
- Define additional services and ordering for nested/recursive orchestration.
- Likely integration points:
- Networking readiness before RFS: [config/zinit/network.yaml](config/zinit/network.yaml)
- Early udev coldplug: [config/zinit/udev-trigger.yaml](config/zinit/udev-trigger.yaml)
- Post-RFS coldplug: [config/zinit/udev-rfs.yaml](config/zinit/udev-rfs.yaml)
- Ensure dependency edges are correct in the service DAG image (see docs/img_*.png).
- Add zosstorage to initramfs
- Source:
- If packaged: add to [config/packages.list](config/packages.list).
- If built from source: extend [bash.components_parse_sources_conf()](scripts/lib/components.sh:13) and add a build_* function; install via [bash.initramfs_copy_components()](scripts/lib/initramfs.sh:102).
- Zinit unit:
- Add YAML under [config/zinit/](config/zinit/) and hook into the network-ready path.
- Ordering:
- Start after "network" and before/with RFS mounts if it provides storage functionality used by rfs.
- RFS blob store backends (design + docs; http and s3 exist)
- Current S3 store URI construction: [bash.rfs_common_build_s3_store_uri()](scripts/rfs/common.sh:137)
- Flist manifest store patching: [bash.rfs_common_patch_flist_stores()](scripts/rfs/common.sh:385)
- Route URL patching: [bash.rfs_common_patch_flist_route_url()](scripts/rfs/common.sh:494)
- Packers entrypoints:
- [scripts/rfs/pack-modules.sh](scripts/rfs/pack-modules.sh:1)
- [scripts/rfs/pack-firmware.sh](scripts/rfs/pack-firmware.sh:1)
- Proposed additional backend: RESP/DB-style store
- Goal: Allow rfs to push/fetch content-addressed blobs via a RESP-compatible endpoint (e.g., Redis/KeyDB/Dragonfly-like), or a thin HTTP/RESP adapter.
- Draft URI scheme examples:
- resp://host:port/db?tls=0&prefix=blobs
- resp+tls://host:port/db?prefix=blobs&ca=/etc/ssl/certs/ca.pem
- resp+sentinel://sentinelHost:26379/mymaster?prefix=blobs
- Minimum operations:
- PUT blob: SETEX prefix/ab/cd/hash ttl file-bytes or HSET prefix/hash data file-bytes
- GET blob: GET or HGET
- HEAD/exists: EXISTS
- Optional: pipelined/mget for batch prefetch
- Client integration layers:
- Pack-time: extend rfs CLI store resolver (design doc first; scripts/rfs/common.sh can map scheme→uploader if CLI not ready).
- Manifest post-process: still supported; stores table may include multiple URIs (s3 + resp) for redundancy.
- Caching and retries:
- Local on-disk cache under dist/.rfs-cache keyed by hash with LRU GC.
- Exponential backoff on GET failures; fall back across stores in order.
- Auth:
- RESP: optional username/password in URI; TLS with cert pinning parameters.
- Keep secrets in config/rfs.conf or env; do not embed write creds in manifests (read-credential routes only).
- Deliverables:
- Design section in docs/rfs-flists.md (to be added)
- Config keys in config/rfs.conf.example for RESP endpoints
- Optional shim uploader script if CLI support lags.
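The key layout and store-fallback behavior proposed above can be sketched client-side (all names are hypothetical; `fetch_from_store` stands in for a per-scheme fetcher such as a redis-cli GET or an HTTP download, and the `prefix/ab/cd/hash` sharding follows the examples in the minimum-operations list):

```shell
#!/bin/sh
# Content-addressed key sharding plus ordered store fallback with
# exponential backoff, as outlined for the proposed RESP backend.

blob_key() {
    # $1: content hash; $2: key prefix (default "blobs")
    h="$1"; p="${2:-blobs}"
    s1="$(printf '%.2s' "$h")"                     # shard level 1: chars 1-2
    s2="$(printf '%.4s' "$h")"; s2="${s2#"$s1"}"   # shard level 2: chars 3-4
    printf '%s/%s/%s/%s\n' "$p" "$s1" "$s2" "$h"
}

fetch_blob() {
    # $1: hash; remaining args: store URIs tried in order.
    hash="$1"; shift
    delay=1
    for attempt in 1 2 3 4; do
        for store in "$@"; do
            if fetch_from_store "$store" "$(blob_key "$hash")"; then
                return 0
            fi
        done
        sleep "$delay"
        delay=$((delay * 2))   # exponential backoff between full passes
    done
    return 1
}
```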
- Documentation refresh tasks
- Cross-check this file's clickable references against [scripts/functionlist.md](scripts/functionlist.md) after changes in lib files.
- Keep “Branding behavior” and “Absolute Path Normalization” pointers aligned with:
- [bash.common.sh normalization](scripts/lib/common.sh:244)
- [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:568)
Diagnostics-first reminder
- Use DEBUG=1 and stage markers for minimal rebuilds.
- Quick commands:
- Show stages: ./scripts/build.sh --show-stages
- Minimal rebuild after zinit/init edits: [scripts/rebuild-after-zinit.sh](scripts/rebuild-after-zinit.sh)
- Validate archive: [bash.initramfs_create_cpio()](scripts/lib/initramfs.sh:691), then [bash.initramfs_test_archive()](scripts/lib/initramfs.sh:953)

docs/PROMPT.md Normal file
@@ -0,0 +1,202 @@
You are Kilo Code, an expert software debugger for this Zero-OS Alpine Initramfs Builder repository.
Mission
- Be a precise, surgical debugger and maintainer for this repo.
- Default to minimal-change fixes with explicit logging to validate assumptions.
- For any suspected root cause, add diagnostics first, confirm in logs, then implement the fix.
- Always show function-and-file context via clickable references like [bash.some_function()](scripts/file.sh:123).
- Use the staged, incremental build pipeline; never rebuild more than necessary.
Repository map (jump-points)
- Build entrypoint and stages:
- [scripts/build.sh](scripts/build.sh)
- Orchestrator main: [bash.main_build_process()](scripts/build.sh:214)
- Kernel build stage wrapper: [bash.stage_kernel_build()](scripts/build.sh:398)
- Initramfs create stage: [bash.stage_initramfs_create()](scripts/build.sh:374)
- Initramfs test stage: [bash.stage_initramfs_test()](scripts/build.sh:385)
- Stages infra: [bash.stage_run()](scripts/lib/stages.sh:99), [scripts/lib/stages.sh](scripts/lib/stages.sh)
- Common utilities and config:
- Config load, logging, path normalization: [bash.common.sh](scripts/lib/common.sh:1)
- Absolute path normalization for INSTALL_DIR, COMPONENTS_DIR, KERNEL_DIR, DIST_DIR: [bash.common.sh](scripts/lib/common.sh:236)
- Initramfs assembly:
- All initramfs functions: [scripts/lib/initramfs.sh](scripts/lib/initramfs.sh)
- Final customization hook (branding, dirs, ntp): [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:575)
- Create archive (pre-CPIO checks, call finalize): [bash.initramfs_create_cpio()](scripts/lib/initramfs.sh:688)
- Validate contents (with new diagnostics): [bash.initramfs_validate()](scripts/lib/initramfs.sh:799)
- Kernel integration:
- Kernel helpers: [scripts/lib/kernel.sh](scripts/lib/kernel.sh)
- Embed initramfs in config: [bash.kernel_modify_config_for_initramfs()](scripts/lib/kernel.sh:130)
- Zinit config and init scripts (inside initramfs):
- zinit YAML/services: [config/zinit/](config/zinit/)
- Modules mount script: [sh.modules.sh](config/zinit/init/modules.sh:1)
- Firmware mount script: [sh.firmware.sh](config/zinit/init/firmware.sh:1)
- Network, ntpd, etc: [config/zinit/init/](config/zinit/init/)
- RFS flists tooling:
- Modules packer: [bash.pack-modules.sh](scripts/rfs/pack-modules.sh:1)
- Firmware packer: [bash.pack-firmware.sh](scripts/rfs/pack-firmware.sh:1)
- Boot media utility:
- GRUB USB creator: [scripts/make-grub-usb.sh](scripts/make-grub-usb.sh)
High-priority behaviors and policies
1) Branding passwordless root (shadow-aware)
- Implemented in finalize via passwd (no manual file edits):
- [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:575)
- Deletes root password inside initramfs using chroot: chroot ${initramfs_dir} passwd -d root
- This ensures /etc/shadow has root:: and console login is passwordless when branding is enabled.
2) Path normalization (prevents “resolved under kernel/current” errors)
- After loading [config/build.conf](config/build.conf), key directories are normalized to absolute paths:
- [bash.common.sh](scripts/lib/common.sh:236)
- Prevents validation resolving INSTALL_DIR relative to CWD (e.g., /workspace/kernel/current/initramfs).
3) Initramfs essential directories guarantee
- During finalize, enforce presence of essential dirs including /home:
- [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:575)
- Pre-CPIO essential check includes “home”:
- [bash.initramfs_create_cpio()](scripts/lib/initramfs.sh:680)
4) Remote flist fallback + readiness (modules + firmware)
- When local manifests are missing, fetch from zos.grid.tf and mount via rfs:
- Firmware: [sh.firmware.sh](config/zinit/init/firmware.sh:1)
- BASE_URL from FLISTS_BASE_URL (or FLIST_BASE_URL alias), default https://zos.grid.tf/store/flists
- Probes BASE_URL for HTTP(S) readiness (wget --spider) before fetching firmware-latest.fl
- Fetch path: ${BASE_URL%/}/firmware-latest.fl to /etc/rfs/firmware-latest.fl
- Modules: [sh.modules.sh](config/zinit/init/modules.sh:1)
- BASE_URL from FLISTS_BASE_URL (or FLIST_BASE_URL alias)
- Probes BASE_URL for HTTP(S) readiness before fetching modules-$(uname -r)-Zero-OS.fl
- Env overrides:
- FIRMWARE_FLIST, MODULES_FLIST: use local file if provided
- RFS_BIN: defaults to rfs
- FLISTS_BASE_URL or FLIST_BASE_URL: override base URL
- wget is available (initramfs includes it); scripts prefer wget, fallback to busybox wget if needed.
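The readiness probe described above is a simple poll loop; a minimal sketch (the function name is hypothetical, and `PROBE_CMD` is an injectable override added here for testability, not something the repo's init scripts define):

```shell
#!/bin/sh
# Poll BASE_URL until it answers, mirroring the wget --spider probe.
wait_for_base_url() {
    url="${1:-${FLISTS_BASE_URL:-${FLIST_BASE_URL:-https://zos.grid.tf/store/flists}}}"
    tries="${2:-30}"; delay="${3:-2}"
    i=0
    while [ "$i" -lt "$tries" ]; do
        if ${PROBE_CMD:-wget -q --spider} "$url" 2>/dev/null; then
            return 0
        fi
        i=$((i + 1))
        sleep "$delay"
    done
    return 1
}
```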
5) Incremental build guards
- Kernel build now defaults INITRAMFS_ARCHIVE if unset (fix for unbound var on incremental runs):
- [bash.stage_kernel_build()](scripts/build.sh:398)
- Initramfs test stage already guards INITRAMFS_ARCHIVE:
- [bash.stage_initramfs_test()](scripts/build.sh:385)
6) Early keyboard input and debug shell
- Early HID/input and USB HCD modules are preloaded before zinit to ensure console usability:
- [config.init](config/init:80)
- Debug hook: kernel cmdline initdebug=true runs /init-debug if present or drops to a shell:
- [config.init](config/init:115)
Flags and config
- Config file: [config/build.conf](config/build.conf)
- Branding flags:
- ZEROOS_BRANDING="true"
- ZEROOS_REBRANDING="true"
- Branding password behavior:
- ZEROOS_PASSWORDLESS_ROOT="true" (current default behavior for branded builds)
- If switching to password-based later (not current policy), prefer using chpasswd with -R (requires minimal re-enable).
- Directories (relative in config, normalized to abs at runtime):
- INSTALL_DIR="initramfs"
- COMPONENTS_DIR="components"
- KERNEL_DIR="kernel"
- DIST_DIR="dist"
- Firmware policies:
- Initramfs: [config/firmware.conf](config/firmware.conf) is authoritative; modules.conf firmware hints are ignored.
- RFS: full Alpine firmware set is installed into container and packed from /lib/firmware (see [bash.pack-firmware.sh](scripts/rfs/pack-firmware.sh:1)).
- Firmware flist naming tag:
- FIRMWARE_TAG (env > config > “latest”)
- Container image tools (podman rootless OK) defined by [Dockerfile](Dockerfile):
- Key packages: shadow (passwd/chpasswd), openssl, openssl-dev, build-base, rustup, kmod, upx, wget, etc.
Diagnostics-first workflow (strict)
- For any failure, first collect specific logs:
- Enable DEBUG=1 for verbose logs.
- Re-run only the impacted stage if possible:
- Example: rm -f .build-stages/validation.done && DEBUG=1 ./scripts/build.sh
- Use existing diagnostics:
- Branding debug lines: [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:575)
- Validation debug lines (input, PWD, PROJECT_ROOT, INSTALL_DIR, resolved): [bash.initramfs_validate()](scripts/lib/initramfs.sh:799)
- Pre-CPIO sanity listing and essential checks: [bash.initramfs_create_cpio()](scripts/lib/initramfs.sh:688)
- Only after validation confirms the hypothesis, apply the minimal fix.
Common tasks and commands
- Full incremental build (always inside container):
- DEBUG=1 ./scripts/build.sh
- Minimal rebuild of last steps:
- rm -f .build-stages/initramfs_create.done .build-stages/initramfs_test.done .build-stages/validation.done
- DEBUG=1 ./scripts/build.sh
- Validation only:
- rm -f .build-stages/validation.done
- DEBUG=1 ./scripts/build.sh
- Show stage status:
- ./scripts/build.sh --show-stages
- Test built kernel:
- ./runit.sh --hypervisor qemu
- ./runit.sh --hypervisor ch --disks 5
Checklists and helpers
A) Diagnose “passwordless root not working”
- Confirm branding flags are loaded:
- Check “Branding debug:” lines in logs from [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:575)
- Confirm /etc/shadow was updated in initramfs:
- Extract dist/initramfs.cpio.xz to a temp dir and grep '^root:' etc/shadow; expect root::
- If not present:
- Ensure passwd is available in container (comes from shadow package): [Dockerfile](Dockerfile)
- Check we use chroot ${initramfs_dir} passwd -d root (not --root or direct file edits): [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:592)
B) Fix “Initramfs directory not found: initramfs (resolved: /workspace/kernel/current/initramfs)”
- Confirm absolute path normalization after config load:
- [bash.common.sh](scripts/lib/common.sh:236)
- Confirm validation prints “Validation debug:” with resolved absolute path:
- [bash.initramfs_validate()](scripts/lib/initramfs.sh:799)
C) Minimal rebuild after zinit/init/modules.conf changes
- Use the helper (works from host or container):
- scripts/rebuild-after-zinit.sh
- Defaults: initramfs-only; clears only modules_setup, modules_copy, init_script, zinit_setup, validation, initramfs_create, initramfs_test
- Flags:
- --with-kernel: also rebuild kernel; cpio is recreated immediately before embedding
- --refresh-container-mods: rebuild container /lib/modules for fresh containers
- --verify-only: report changed files and stage status; no rebuild
- Stage status is printed before/after marker removal; the helper avoids --rebuild-from by default to prevent running early stages.
- Manual fallback:
- rm -f .build-stages/initramfs_create.done .build-stages/initramfs_test.done .build-stages/validation.done
- DEBUG=1 ./scripts/build.sh
D) INITRAMFS_ARCHIVE unbound during kernel build stage
- stage_kernel_build now defaults INITRAMFS_ARCHIVE if unset:
- [bash.stage_kernel_build()](scripts/build.sh:398)
- If error persists, ensure stage_initramfs_create ran or that defaulting logic sees dist/initramfs.cpio.xz.
E) Modules/firmware not found by rfs init scripts
- Confirm local manifests under /etc/rfs or remote fallback:
- Firmware: [sh.firmware.sh](config/zinit/init/firmware.sh:1)
- Modules: [sh.modules.sh](config/zinit/init/modules.sh:1)
- For remote:
- Set FLISTS_BASE_URL or FLIST_BASE_URL; default is https://zos.grid.tf/store/flists
- Scripts probe BASE_URL readiness (wget --spider) before fetch
- Firmware target is /lib/firmware; modules target is /lib/modules/$(uname -r)
- Confirm uname -r matches remote naming “modules-$(uname -r)-Zero-OS.fl”
- Confirm wget present (or busybox wget)
Project conventions
- Edit policy:
- Use minimal, localized changes; keep behavior and structure intact unless necessary.
- Add diagnostics before fixing; confirm in logs.
- Commit policy:
- Write clear, component-scoped messages (e.g., “initramfs: …”, “build: …”, “zinit(init): …”).
- Ask-first policy:
- Ask to confirm diagnoses before full fixes when the outcome is uncertain.
- Provide 2–3 concrete paths forward.
Key files to keep in sync with behavior decisions
- Branding and finalization: [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:575)
- Validation diagnostics: [bash.initramfs_validate()](scripts/lib/initramfs.sh:799)
- Archive creation (pre-CPIO checks): [bash.initramfs_create_cpio()](scripts/lib/initramfs.sh:688)
- Path normalization after config: [bash.common.sh](scripts/lib/common.sh:236)
- Modules/firmware remote fallback + readiness: [sh.modules.sh](config/zinit/init/modules.sh:1), [sh.firmware.sh](config/zinit/init/firmware.sh:1)
- Kernel stage defaulting for archive: [bash.stage_kernel_build()](scripts/build.sh:398)
- GRUB USB creator: [scripts/make-grub-usb.sh](scripts/make-grub-usb.sh)
- Operational notes: [docs/NOTES.md](docs/NOTES.md)
When in doubt
- Prefer adding logs over guessing.
- Verify STAGES_DIR markers to avoid stale incremental state: [scripts/lib/stages.sh](scripts/lib/stages.sh)
- Normalize to PROJECT_ROOT inside container before stages if CWD shifts.
- Use DEBUG=1 to see safe_execute echo commands and outputs.

docs/TODO.md Normal file
@@ -0,0 +1,73 @@
# Zero-OS Builder Persistent TODO
This canonical checklist is the single source of truth for ongoing work. It mirrors the live task tracker but is versioned in-repo for review and PRs. Jump-points reference exact functions and files for quick triage.
## High-level
- [x] Regenerate repository function index: [scripts/functionlist.md](../scripts/functionlist.md)
- [x] Refresh NOTES with jump-points and roadmap: [docs/NOTES.md](NOTES.md)
- [x] Extend RFS design with RESP/DB-style backend: [docs/rfs-flists.md](rfs-flists.md)
- [x] Make Rust components Git fetch non-destructive: [bash.components_download_git()](../scripts/lib/components.sh:72)
- [ ] Update zinit config for "zosception" workflow: [config/zinit/](../config/zinit/)
- [ ] Add zosstorage to the initramfs (package/build/install + zinit unit)
- [ ] Validate via minimal rebuild and boot tests; refine depmod/udev docs
- [ ] Commit and push documentation and configuration updates (post-zosception/zosstorage)
## Zosception (zinit service graph and ordering)
- [ ] Define service graph changes and ordering constraints
- Reference current triggers:
- Early coldplug: [config/zinit/udev-trigger.yaml](../config/zinit/udev-trigger.yaml)
- Network readiness: [config/zinit/network.yaml](../config/zinit/network.yaml)
- Post-mount coldplug: [config/zinit/udev-rfs.yaml](../config/zinit/udev-rfs.yaml)
- [ ] Add/update units under [config/zinit/](../config/zinit/) with proper after/needs/wants
- [ ] Validate with stage runner and logs
- Stage runner: [bash.stage_run()](../scripts/lib/stages.sh:99)
- Main flow: [bash.main_build_process()](../scripts/build.sh:214)
## ZOS Storage in initramfs
- [ ] Decide delivery mechanism:
- [ ] APK via [config/packages.list](../config/packages.list)
- [ ] Source build via [bash.components_parse_sources_conf()](../scripts/lib/components.sh:13) with a new build function
- [ ] Copy/install into initramfs
- Components copy: [bash.initramfs_copy_components()](../scripts/lib/initramfs.sh:102)
- Zinit setup: [bash.initramfs_setup_zinit()](../scripts/lib/initramfs.sh:13)
- [ ] Create zinit unit(s) for zosstorage startup and ordering
- Place after network and before RFS if it provides storage used by rfs
- [ ] Add smoke command to confirm presence in image (e.g., which/--version) during [bash.initramfs_validate()](../scripts/lib/initramfs.sh:820)
## RFS backends — implementation follow-up (beyond design)
- [ ] RESP uploader shim for packers (non-breaking)
- Packers entrypoints: [scripts/rfs/pack-modules.sh](../scripts/rfs/pack-modules.sh), [scripts/rfs/pack-firmware.sh](../scripts/rfs/pack-firmware.sh)
- Config loader: [bash.rfs_common_load_rfs_s3_config()](../scripts/rfs/common.sh:82) → extend to parse RESP_* (non-breaking)
- Store URI builder (S3 exists): [bash.rfs_common_build_s3_store_uri()](../scripts/rfs/common.sh:137)
- Manifest patching remains:
- Stores table: [bash.rfs_common_patch_flist_stores()](../scripts/rfs/common.sh:385)
- route.url: [bash.rfs_common_patch_flist_route_url()](../scripts/rfs/common.sh:494)
- [ ] Connectivity checks and retries for RESP path
- [ ] Local cache for pack-time (optional)
## Validation and boot tests
- [ ] Minimal rebuild after zinit/unit edits
- Helper: [scripts/rebuild-after-zinit.sh](../scripts/rebuild-after-zinit.sh)
- [ ] Validate contents before CPIO
- Create: [bash.initramfs_create_cpio()](../scripts/lib/initramfs.sh:691)
- Validate: [bash.initramfs_validate()](../scripts/lib/initramfs.sh:820)
- [ ] QEMU / cloud-hypervisor smoke tests
- Test runner: [runit.sh](../runit.sh)
- [ ] Kernel embed path and versioning sanity
- Embed config: [bash.kernel_modify_config_for_initramfs()](../scripts/lib/kernel.sh:130)
- Full version logic: [bash.kernel_get_full_version()](../scripts/lib/kernel.sh:14)
## Operational conventions (keep)
- [ ] Diagnostics-first changes; add logs before fixes
- [ ] Absolute path normalization respected
- Normalization: [scripts/lib/common.sh](../scripts/lib/common.sh:244)
Notes
- Keep this file in sync with the live tracker. Reference it in PR descriptions.
- Use the clickable references above for rapid navigation.

docs/depmod-behavior.md Normal file
@@ -0,0 +1,71 @@
# depmod behavior, impact on lazy-mounted module stores, and flist store rewriting
Summary (short answer)
- depmod builds the modules dependency/alias databases by scanning the modules tree under /lib/modules/<kernel>. It reads metadata from each .ko file (.modinfo section) to generate:
- modules.dep(.bin), modules.alias(.bin), modules.symbols(.bin), modules.devname, modules.order, etc.
- It does not load modules; it opens many files for small reads. On a lazy store, the first depmod run can trigger many object fetches.
- If modules metadata files are already present and consistent (as produced during build), modprobe can work without re-running depmod. Use depmod -A (update only) or skip depmod entirely if timestamps and paths are unchanged.
- For private S3 (garage) without anonymous read, post-process the .fl manifest to replace the store URI with your HTTPS web endpoint for that bucket, so runtime mounts fetch over the web endpoint instead of signed S3.
Details
1) What depmod actually reads/builds
- Inputs scanned under /lib/modules/<kernel>:
- .ko files: depmod reads ELF .modinfo to collect depends=, alias=, vermagic, etc. It does not execute or load modules.
- modules.builtin and modules.builtin.modinfo: indicate built-in drivers so they are excluded from dep graph.
- Optional flags:
- depmod -F <System.map> and -E <Module.symvers> allow symbol/CRC checks; these are typically not required on target systems for generating dependency/alias maps.
- Outputs (consumed by modprobe/kmod):
- modules.dep and modules.dep.bin: dependency lists and fast index
- modules.alias and modules.alias.bin: modalias to module name mapping
- modules.symbols(.bin), modules.devname, modules.order, etc.
Key property: depmod's default operation opens many .ko files to read .modinfo, which on a lazy FUSE-backed store causes many small reads.
2) Recommended strategy with lazy flists
- Precompute metadata during build:
- In the dev container, your pipeline already runs depmod (see [kernel_build_modules()](scripts/lib/kernel.sh:228)). Ensure the resulting metadata files in /lib/modules/<kernel> are included in the modules flist.
- At runtime after overmounting the modules flist:
- Option A: Do nothing. If your path is the same (/lib/modules/<kernel>), modprobe will use the precomputed .bin maps and will not need to rescan .ko files. This minimizes object fetches (only when a module is actually loaded).
- Option B: Run depmod -A <kernel> (update only if any .ko newer than modules.dep). This typically performs stats on files and only rebuilds if needed, avoiding a full read of all .ko files.
- Option C: Run depmod -a only if you changed the module set or path layout. Expect many small reads on first run.
3) Firmware implications
- No depmod impact, but udev coldplug will probe devices. Keep firmware files accessible via the firmware flist mount (e.g., /lib/firmware).
- Since firmware loads on-demand by the kernel/driver, the lazy store will fetch only needed blobs.
4) Post-processing .fl to use a web endpoint (garage S3 private)
- Goal: Pack/upload blobs to private S3 using credentials, but ship a manifest (.fl) that references a public HTTPS endpoint (or authenticated gateway) that your rfs mount can fetch from without S3 signing.
- Approach A: Use rfs CLI (if supported) to edit store URIs within the manifest.
- Example (conceptual): rfs flist edit-store -m dist/flists/modules-...fl --set https://web.example.com/bucket/prefix
- Approach B: Use sqlite3 to patch the manifest directly (the .fl is sqlite):
- Inspect stores:
- sqlite3 dist/flists/modules-...fl "SELECT id, uri FROM stores;"
- Replace s3 store with web endpoint:
- sqlite3 dist/flists/modules-...fl "UPDATE stores SET uri='https://web.example.com/bucket/prefix' WHERE uri LIKE 's3://%';"
- Validate:
- rfs flist inspect dist/flists/modules-...fl
- Notes:
- The web endpoint you provide must serve the same content-addressed paths that rfs expects. Confirm the object path layout (e.g., /bucket/prefix/ab/cd/abcdef...).
- You can maintain multiple store rows to provide fallbacks (if rfs supports trying multiple stores).
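Approach B above can be wrapped in a small helper; a minimal sketch (the function name is hypothetical; the `stores` table columns `id` and `uri` are as shown in the inspect query, and sqlite3 must be on PATH):

```shell
#!/bin/sh
# Rewrite s3:// store URIs in an .fl manifest (the .fl is sqlite) to a
# public HTTPS endpoint, then print the resulting stores table.
patch_fl_stores() {
    # $1: path to .fl manifest; $2: replacement HTTPS endpoint
    fl="$1"; endpoint="$2"
    sqlite3 "$fl" "UPDATE stores SET uri='${endpoint}' WHERE uri LIKE 's3://%';"
    sqlite3 "$fl" "SELECT id, uri FROM stores;"
}

# Usage: patch_fl_stores dist/flists/modules-...fl https://web.example.com/bucket/prefix
```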
5) Suggested runtime sequence after overmount (with precomputed metadata)
- Mount modules flist read-only at /lib/modules/<kernel>.
- Optionally depmod -A <kernel> (cheap; no full scan).
- udevadm control --reload; udevadm trigger --action=add; udevadm settle
- Load required baseline modules (stage1) if needed; the lazy store ensures only requested .ko files are fetched.
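The post-overmount steps (minus the mount itself and baseline module loading) can be sketched as follows; the function name is hypothetical, while the depmod/udevadm invocations are the ones documented above:

```shell
#!/bin/sh
# Post-overmount refresh: cheap depmod update, then udev coldplug.
refresh_after_overmount() {
    kver="${1:-$(uname -r)}"
    depmod -A "$kver"             # rebuild metadata only if some .ko is newer
    udevadm control --reload
    udevadm trigger --action=add
    udevadm settle
}
```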
6) Practical checklist for our scripts
- Ensure pack-modules includes:
- /lib/modules/<kernel>/*.ko*
- All modules.* metadata files (dep, alias, symbols, order, builtin, *.bin)
- After pack completes and blobs are uploaded to S3, patch the .fl manifest's stores table to the public HTTPS endpoint of your garage bucket/web gateway.
- Provide verify utilities:
- rfs flist inspect/tree
- Optional local mount test against the web endpoint referenced in the manifest.
Appendix: Commands and flags
- Generate/update metadata (build-time): depmod -a <kernel>
- Fast update at boot: depmod -A <kernel> # only if newer/changed
- Chroot/base path (useful for initramfs image pathing): depmod -b <base> -a <kernel>
- Modprobe uses *.bin maps when present, which avoids parsing large text maps on every lookup.

docs/img_1758452705037.png Normal file (binary image, 230 KiB; not shown)

@@ -0,0 +1,178 @@
# Review: Current Build Flow and RFS Integration Hook Points
This document reviews the current Zero-OS Alpine initramfs build flow, identifies reliable sources for kernel versioning and artifacts, and specifies clean integration points for RFS flist generation and later runtime overmounts without modifying existing code paths.
## Build flow overview
Primary orchestrator: [scripts/build.sh](scripts/build.sh)
Key sourced libraries:
- [alpine.sh](scripts/lib/alpine.sh)
- [components.sh](scripts/lib/components.sh)
- [kernel.sh](scripts/lib/kernel.sh)
- [initramfs.sh](scripts/lib/initramfs.sh)
- [stages.sh](scripts/lib/stages.sh)
- [docker.sh](scripts/lib/docker.sh)
Main stages executed (incremental via [stage_run()](scripts/lib/stages.sh:99)):
1) alpine_extract, alpine_configure, alpine_packages
2) alpine_firmware
3) components_build, components_verify
4) kernel_modules
5) init_script, components_copy, zinit_setup
6) modules_setup, modules_copy
7) cleanup, validation
8) initramfs_create, initramfs_test, kernel_build
9) boot_tests
## Where key artifacts come from
- Kernel full version:
- Derived deterministically using [kernel_get_full_version()](scripts/lib/kernel.sh:14)
- Computed as: KERNEL_VERSION from [config/build.conf](config/build.conf) + CONFIG_LOCALVERSION from [config/kernel.config](config/kernel.config)
- Example target: 6.12.44-Zero-OS
- Built modules in container:
- Stage: [kernel_build_modules()](scripts/lib/kernel.sh:228)
- Builds and installs into container root: /lib/modules/<FULL_VERSION>
- Runs depmod in container and sets:
- CONTAINER_MODULES_PATH=/lib/modules/<FULL_VERSION>
- KERNEL_FULL_VERSION=<FULL_VERSION>
- Initramfs modules copy and metadata:
- Stage: [initramfs_copy_resolved_modules()](scripts/lib/initramfs.sh:846)
- Copies selected modules and dep metadata into initramfs under initramfs/lib/modules/<FULL_VERSION>
- Firmware content:
- Preferred (per user): a full tree at $root/firmware in the dev-container, intended to be packaged as-is
- Fallback within build flow: firmware packages installed by [alpine_install_firmware()](scripts/lib/alpine.sh:392) into initramfs/lib/firmware
- rfs binary:
- Built via [build_rfs()](scripts/lib/components.sh:299) into [components/rfs/target/x86_64-unknown-linux-musl/release/rfs](components/rfs/target/x86_64-unknown-linux-musl/release/rfs)
- Also expected to be available on PATH inside dev-container
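The full-version derivation described above (KERNEL_VERSION from build.conf plus CONFIG_LOCALVERSION from kernel.config) can be sketched as a simplified reimplementation; this is not the repo's [kernel_get_full_version()](scripts/lib/kernel.sh:14), and it assumes both values appear as quoted assignments:

```shell
#!/bin/sh
# Compute <KERNEL_VERSION><CONFIG_LOCALVERSION>, e.g. 6.12.44 + -Zero-OS.
kernel_full_version() {
    # $1: path to build.conf; $2: path to kernel.config
    conf="$1"; kcfg="$2"
    ver="$(sed -n 's/^KERNEL_VERSION="\([^"]*\)".*/\1/p' "$conf" | head -n 1)"
    localver="$(sed -n 's/^CONFIG_LOCALVERSION="\([^"]*\)".*/\1/p' "$kcfg" | head -n 1)"
    printf '%s%s\n' "$ver" "$localver"
}
```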
## udev and module load sequencing at runtime
- zinit units present:
- udevd: [config/zinit/udevd.yaml](config/zinit/udevd.yaml)
- depmod: [config/zinit/depmod.yaml](config/zinit/depmod.yaml)
- udev trigger: [config/zinit/udev-trigger.yaml](config/zinit/udev-trigger.yaml) calling [udev.sh](config/zinit/init/udev.sh)
- initramfs module orchestration:
- Module resolution logic: [initramfs_setup_modules()](scripts/lib/initramfs.sh:225) and [initramfs_resolve_module_dependencies()](scripts/lib/initramfs.sh:313)
- Load scripts created for zinit:
- stage1: [initramfs_create_module_scripts()](scripts/lib/initramfs.sh:422) emits /etc/zinit/init/stage1-modules.sh
- stage2 is currently disabled in config
## Current integration gaps for RFS flists
- There is no existing code that:
- Packs modules or firmware into RFS flists (.fl sqlite manifests)
- Publishes associated content-addressed blobs to a store
- Uploads the .fl manifest to an S3 bucket (separate from the blob store)
- Mounts these flists at runtime prior to udev coldplug
## Reliable inputs for RFS pack
- Kernel full version: use [kernel_get_full_version()](scripts/lib/kernel.sh:14) logic (never `uname -r` inside container)
- Modules source tree candidates (priority):
1) /lib/modules/<FULL_VERSION> (from [kernel_build_modules()](scripts/lib/kernel.sh:228))
2) initramfs/lib/modules/<FULL_VERSION> (if container path unavailable; less ideal)
- Firmware source tree candidates (priority):
1) $PROJECT_ROOT/firmware (external provided tree; user-preferred)
2) initramfs/lib/firmware (APK-installed fallback)
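The priority rules above can be expressed directly; this is a sketch assuming PROJECT_ROOT and FULL_VERSION are provided by the caller (as the planned scripts/rfs/common.sh would do):

```shell
#!/bin/sh
# Sketch: pick modules/firmware source trees per the documented priority order.
set -eu

select_modules_tree() {
  if [ -d "/lib/modules/${FULL_VERSION}" ]; then
    echo "/lib/modules/${FULL_VERSION}"                            # container install
  elif [ -d "${PROJECT_ROOT}/initramfs/lib/modules/${FULL_VERSION}" ]; then
    echo "${PROJECT_ROOT}/initramfs/lib/modules/${FULL_VERSION}"   # less ideal fallback
  else
    echo "error: no modules tree for ${FULL_VERSION}" >&2
    return 1
  fi
}

select_firmware_tree() {
  if [ -d "${PROJECT_ROOT}/firmware" ]; then
    echo "${PROJECT_ROOT}/firmware"                  # user-provided full tree
  else
    echo "${PROJECT_ROOT}/initramfs/lib/firmware"    # APK-installed fallback
  fi
}
```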
## S3 configuration needs
A new configuration file is required to avoid touching existing code:
- Path: config/rfs.conf (to be created)
- Required keys:
- S3_ENDPOINT (e.g., https://s3.example.com:9000)
- S3_REGION
- S3_BUCKET
- S3_PREFIX (path prefix under bucket for blobs/optionally manifests)
- S3_ACCESS_KEY
- S3_SECRET_KEY
- These values will be consumed by standalone scripts (not existing build flow)
## Proposed standalone scripts (no existing code changes)
Directory: scripts/rfs
- common.sh
- Read [config/build.conf](config/build.conf), [config/kernel.config](config/kernel.config) to compute FULL_KERNEL_VERSION
- Read [config/rfs.conf](config/rfs.conf) and construct RFS S3 store URI
- Detect rfs binary from PATH or [components/rfs](components/rfs)
- Locate modules and firmware source trees per the above priority order
- pack-modules.sh
- Name: modules-<FULL_KERNEL_VERSION>.fl
- Command: rfs pack -m dist/flists/modules-...fl -s s3://... /lib/modules/<FULL_VERSION>
- Then upload the .fl manifest to s3://BUCKET/PREFIX/manifests/ via aws CLI if available
- pack-firmware.sh
- Name: firmware-<YYYYMMDD>.fl by default, overridable via FIRMWARE_TAG
- Source: $PROJECT_ROOT/firmware preferred, else initramfs/lib/firmware
- Pack with rfs and upload the .fl manifest similarly
- verify-flist.sh
- rfs flist inspect dist/flists/NAME.fl
- rfs flist tree dist/flists/NAME.fl | head
- Optional test mount with a temporary mountpoint when requested
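A minimal sketch of the proposed pack-modules.sh flow follows. The function shape and the DRY_RUN switch are illustrative only; the real naming and store-URI logic would live in scripts/rfs/common.sh, and the manifest upload step depends on an available S3 CLI:

```shell
#!/bin/sh
# Sketch of pack-modules.sh: pack a modules flist, then upload the manifest.
set -eu

pack_modules() {
  full_version="$1"
  store_uri="$2"
  fl="dist/flists/modules-${full_version}.fl"
  # DRY_RUN=1 prints the commands instead of running them (sketch aid only).
  if [ "${DRY_RUN:-0}" = "1" ]; then run="echo"; else run=""; fi
  $run mkdir -p dist/flists
  $run rfs pack -m "$fl" -s "$store_uri" "/lib/modules/${full_version}"
  # The .fl manifest is uploaded separately from the blob store.
  $run mc cp "$fl" "store/${S3_BUCKET:-zos}/${S3_PREFIX:-flists}/manifests/"
}
```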
## Future runtime units (deferred)
Will be added as new zinit units once flist generation is validated:
- Mount firmware flist read-only at /lib/firmware (overmounting hides the initramfs firmware beneath it)
- Mount modules flist read-only at /lib/modules/<FULL_VERSION>
- Run depmod -a <FULL_VERSION>
- Run udev coldplug sequence (reload, trigger add, settle)
Placement relative to current units:
- Must occur before [udev-trigger.yaml](config/zinit/udev-trigger.yaml)
- Should ensure [depmod.yaml](config/zinit/depmod.yaml) is sequenced after modules are available from mount
## Flow summary (Mermaid)
```mermaid
flowchart TD
    A[Build start] --> B[alpine_extract/configure/packages]
    B --> C[components_build verify]
    C --> D["kernel_modules<br/>install modules in container<br/>set KERNEL_FULL_VERSION"]
    D --> E[init_script zinit_setup]
    E --> F[modules_setup copy]
    F --> G[cleanup validation]
    G --> H[initramfs_create test kernel_build]
    H --> I[boot_tests]
    subgraph RFS standalone
        R1["Compute FULL_VERSION<br/>from configs"]
        R2["Select sources:<br/>modules /lib/modules/FULL_VERSION<br/>firmware PROJECT_ROOT/firmware or initramfs/lib/firmware"]
        R3["Pack modules flist<br/>rfs pack -s s3://..."]
        R4["Pack firmware flist<br/>rfs pack -s s3://..."]
        R5["Upload .fl manifests<br/>to S3 manifests/"]
        R6["Verify flists<br/>inspect/tree/mount opt"]
    end
    H -. post-build manual .-> R1
    R1 --> R2 --> R3 --> R5
    R2 --> R4 --> R5
    R3 --> R6
    R4 --> R6
```
## Conclusion
- The existing build flow provides deterministic kernel versioning and installs modules into the container at /lib/modules/<FULL_VERSION>, which is ideal for RFS packing.
- Firmware can be sourced from the user-provided tree or the initramfs fallback.
- RFS flist creation and publishing can be introduced entirely as standalone scripts and configuration without modifying current code.
- Runtime overmounting and coldplug can be added later via new zinit units once flist generation is validated.

---
New file: docs/rfs-flists.md
# RFS flist creation and runtime overmounts (design)
## Goal
- Produce two flists without modifying existing build scripts:
- firmware-VERSION.fl
- modules-KERNEL_FULL_VERSION.fl
- Store blobs in S3 via rfs store; upload .fl manifest (sqlite) separately to S3.
- Overmount these at runtime later to enable extended hardware, then depmod + udev trigger.
## Scope of this change
- Add standalone scripts under [scripts/rfs](scripts/rfs) (no changes in existing libs or stages).
- Add a config file [config/rfs.conf](config/rfs.conf) for S3 credentials and addressing.
- Document the flow and usage here; scripting comes next.
## Inputs
- Built kernel modules present in the dev-container (from kernel build stages):
- Preferred: /lib/modules/KERNEL_FULL_VERSION
- Firmware source for RFS pack:
- Install all Alpine linux-firmware* packages into the build container and use /lib/firmware as the source (full set).
- Initramfs fallback (build-time):
- Selective firmware packages installed by [alpine_install_firmware()](scripts/lib/alpine.sh:392) into initramfs/lib/firmware (kept inside the initramfs).
- Kernel version derivation (never use uname -r in container):
- Combine KERNEL_VERSION from [config/build.conf](config/build.conf) and LOCALVERSION from [config/kernel.config](config/kernel.config).
- This matches [kernel_get_full_version()](scripts/lib/kernel.sh:14).
## Outputs and locations
- Flists:
- [dist/flists/firmware-VERSION.fl](dist/flists/firmware-VERSION.fl)
- [dist/flists/modules-KERNEL_FULL_VERSION.fl](dist/flists/modules-KERNEL_FULL_VERSION.fl)
- Blobs are uploaded by rfs to the configured S3 store.
- Manifests (.fl sqlite) are uploaded by script as S3 objects (separate from blob store).
## Configuration: [config/rfs.conf](config/rfs.conf)
Required values:
- S3_ENDPOINT=https://s3.example.com:9000
- S3_REGION=us-east-1
- S3_BUCKET=zos
- S3_PREFIX=flists/zosbuilder
- S3_ACCESS_KEY=AKIA...
- S3_SECRET_KEY=...
Notes:
- We construct an rfs S3 store URI for pack operations (for blob uploads during pack):
- s3://S3_ACCESS_KEY:S3_SECRET_KEY@HOST:PORT/S3_BUCKET/S3_PREFIX?region=S3_REGION
- After pack, we correct the flist route URL to include READ-ONLY credentials so mounts can read directly from Garage:
- UPDATE route SET url='s3://READ_ACCESS_KEY:READ_SECRET_KEY@HOST:PORT/ROUTE_PATH?region=ROUTE_REGION'
- Defaults: ROUTE_PATH=/blobs, ROUTE_REGION=garage, ROUTE_ENDPOINT=S3_ENDPOINT (overridable)
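The store-URI construction above can be sketched as follows; stripping the scheme from S3_ENDPOINT to get HOST:PORT is an assumption about how common.sh would derive it:

```shell
#!/bin/sh
# Sketch: build the rfs pack-time S3 store URI from config/rfs.conf values.
set -eu

rfs_store_uri() {
  # https://s3.example.com:9000 -> s3.example.com:9000
  hostport=${S3_ENDPOINT#*://}
  echo "s3://${S3_ACCESS_KEY}:${S3_SECRET_KEY}@${hostport}/${S3_BUCKET}/${S3_PREFIX}?region=${S3_REGION}"
}
```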
## Scripts to add (standalone)
- [scripts/rfs/common.sh](scripts/rfs/common.sh)
- Read [config/build.conf](config/build.conf) and [config/kernel.config](config/kernel.config).
- Compute FULL_KERNEL_VERSION exactly as [kernel_get_full_version()](scripts/lib/kernel.sh:14).
- Read and validate [config/rfs.conf](config/rfs.conf).
- Build S3 store URI for rfs.
- Locate module and firmware source trees (with priority rules).
- Locate rfs binary (PATH first, fallback to [components/rfs/target/x86_64-unknown-linux-musl/release/rfs](components/rfs/target/x86_64-unknown-linux-musl/release/rfs)).
- [scripts/rfs/pack-modules.sh](scripts/rfs/pack-modules.sh)
- Name: modules-KERNEL_FULL_VERSION.fl (e.g., modules-6.12.44-Zero-OS.fl).
- rfs pack -m dist/flists/modules-...fl -s s3://... /lib/modules/KERNEL_FULL_VERSION
- Optional: upload dist/flists/modules-...fl to s3://S3_BUCKET/S3_PREFIX/manifests/ using MinIO Client (mc) if present.
- [scripts/rfs/pack-firmware.sh](scripts/rfs/pack-firmware.sh)
- Source: $PROJECT_ROOT/firmware if it exists, else initramfs/lib/firmware.
- Name: firmware-YYYYMMDD.fl by default; override with FIRMWARE_TAG env to firmware-FIRMWARE_TAG.fl.
- rfs pack as above; optional upload of the .fl manifest using MinIO Client (mc) if present.
- [scripts/rfs/verify-flist.sh](scripts/rfs/verify-flist.sh)
- rfs flist inspect dist/flists/NAME.fl
- rfs flist tree dist/flists/NAME.fl | head
- Optional: test mount if run with --mount (mountpoint under /tmp).
## Runtime (deferred to a follow-up)
- New zinit units to mount and coldplug:
- Mount firmware flist read-only at /lib/firmware
- Mount modules flist at /lib/modules/KERNEL_FULL_VERSION
- Run depmod -a KERNEL_FULL_VERSION
- udevadm control --reload; udevadm trigger --action=add; udevadm settle
- Placement examples (to be created later):
- [config/zinit/rfs-modules.yaml](config/zinit/rfs-modules.yaml)
- [config/zinit/rfs-firmware.yaml](config/zinit/rfs-firmware.yaml)
- Keep in correct dependency order before [config/zinit/udev-trigger.yaml](config/zinit/udev-trigger.yaml).
## Naming policy
- modules flist:
- modules-KERNEL_FULL_VERSION.fl
- firmware flist:
- firmware-YYYYMMDD.fl by default
- firmware-FIRMWARE_TAG.fl if env FIRMWARE_TAG is set
## Usage flow (after your normal build inside dev-container)
1) Create config for S3: [config/rfs.conf](config/rfs.conf)
2) Generate modules flist: [scripts/rfs/pack-modules.sh](scripts/rfs/pack-modules.sh)
3) Generate firmware flist: [scripts/rfs/pack-firmware.sh](scripts/rfs/pack-firmware.sh)
4) Verify manifests: [scripts/rfs/verify-flist.sh](scripts/rfs/verify-flist.sh) dist/flists/modules-...fl
## Assumptions
- rfs supports s3 store URIs as described (per [components/rfs/README.md](components/rfs/README.md)).
- The dev-container has the built kernel modules in /lib/modules/KERNEL_FULL_VERSION (as produced via [kernel_build_modules()](scripts/lib/kernel.sh:228)).
- No changes are made to existing build scripts. The new scripts are run on-demand.
## Open question to confirm
- Confirm S3 endpoint form (with or without explicit port) and whether we should prefer AWS_REGION env over query param; scripts will support both patterns.
## Note on route URL vs HTTP endpoint
- rfs mount reads blobs via s3:// URLs, not via an arbitrary HTTP(S) endpoint. A reverse proxy is not required if you embed read-only S3 credentials in the flist.
- This project now patches the flist after pack to set route.url to a read-only Garage S3 URL:
- Example SQL equivalent:
- UPDATE route SET url='s3://READ_ACCESS_KEY:READ_SECRET_KEY@[HOST]:3900/blobs?region=garage';
- Configure these in config/rfs.conf:
- READ_ACCESS_KEY / READ_SECRET_KEY: read-only credentials
- ROUTE_ENDPOINT (defaults to S3_ENDPOINT), ROUTE_PATH=/blobs, ROUTE_REGION=garage
- Do not set ROUTE_PATH to S3_PREFIX. ROUTE_PATH is the gateway's blob route (usually /blobs). S3_PREFIX is only for the pack-time store path.
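Since the .fl manifest is a sqlite database, the post-pack route patch amounts to one UPDATE. A sketch, with variable names mirroring config/rfs.conf (the real logic lives in the pack scripts):

```shell
#!/bin/sh
# Sketch: rewrite route.url in a packed .fl so mounts use read-only credentials.
set -eu

patch_route_url() {
  fl="$1"   # path to the .fl manifest (a sqlite database)
  hostport=${ROUTE_ENDPOINT#*://}
  url="s3://${READ_ACCESS_KEY}:${READ_SECRET_KEY}@${hostport}${ROUTE_PATH}?region=${ROUTE_REGION}"
  sqlite3 "$fl" "UPDATE route SET url='${url}';"
}
```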
## Runtime units and ordering (zinit)
This repo now includes runtime zinit units and init scripts to mount the RFS flists and perform dual udev coldplug sequences.
- Early coldplug (before RFS mounts):
- [config/zinit/udev-trigger.yaml](config/zinit/udev-trigger.yaml) calls [config/zinit/init/udev.sh](config/zinit/init/udev.sh).
- Runs after depmod/udev daemons to initialize NICs and other devices using what is already in the initramfs.
- Purpose: bring up networking so RFS can reach Garage S3.
- RFS mounts (daemons, after network):
- [config/zinit/rfs-modules.yaml](config/zinit/rfs-modules.yaml) runs [config/zinit/init/modules.sh](config/zinit/init/modules.sh) to mount modules-$(uname -r).fl onto /lib/modules/$(uname -r).
- [config/zinit/rfs-firmware.yaml](config/zinit/rfs-firmware.yaml) runs [config/zinit/init/firmware.sh](config/zinit/init/firmware.sh) to mount firmware-latest.fl onto /usr/lib/firmware.
- Both are defined as restart: always and include after: network to ensure the Garage S3 route is reachable.
- Post-mount coldplug (after RFS mounts):
- [config/zinit/udev-rfs.yaml](config/zinit/udev-rfs.yaml) performs:
- udevadm control --reload
- udevadm trigger --action=add --type=subsystems
- udevadm trigger --action=add --type=devices
- udevadm settle
- This re-probes hardware so new modules/firmware from the overmounted flists are considered.
- Embedded manifests in initramfs:
- The build embeds the flists under /etc/rfs:
- modules-KERNEL_FULL_VERSION.fl
- firmware-latest.fl
- Creation happens in [scripts/rfs/pack-modules.sh](scripts/rfs/pack-modules.sh) and [scripts/rfs/pack-firmware.sh](scripts/rfs/pack-firmware.sh), and embedding is orchestrated by [scripts/build.sh](scripts/build.sh).
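A hypothetical shape for one of these units, based on the ordering described above (the actual files under config/zinit are authoritative and may differ):

```yaml
# config/zinit/rfs-modules.yaml (sketch)
exec: /etc/zinit/init/modules.sh
restart: always
after:
  - network
```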
## Reproducible firmware tagging
- The firmware flist name can be pinned via FIRMWARE_TAG in [config/build.conf](config/build.conf).
- If set: firmware-FIRMWARE_TAG.fl
- If unset: the build uses firmware-latest.fl for embedding (standalone pack may default to date-based).
- The build logic picks the tag with this precedence:
1) Environment FIRMWARE_TAG
2) FIRMWARE_TAG from [config/build.conf](config/build.conf)
3) "latest"
- Build integration implemented in [scripts/build.sh](scripts/build.sh).
Example:
- Set FIRMWARE_TAG in config: add FIRMWARE_TAG="20250908" in [config/build.conf](config/build.conf)
- Or export at build time: export FIRMWARE_TAG="v1"
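The precedence above can be sketched as a small helper (the sed parsing of config/build.conf is illustrative, not the actual build.sh code):

```shell
#!/bin/sh
# Sketch: FIRMWARE_TAG precedence: env > config/build.conf > "latest".
set -eu

firmware_tag() {
  if [ -n "${FIRMWARE_TAG:-}" ]; then
    echo "$FIRMWARE_TAG"
    return
  fi
  tag=$(sed -n 's/^FIRMWARE_TAG="\(.*\)"$/\1/p' config/build.conf 2>/dev/null || true)
  [ -n "$tag" ] && echo "$tag" || echo "latest"
}
```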
## Verifying flists
Use the helper to inspect a manifest, optionally listing entries and testing a local mount (root + proper FUSE policy required):
- Inspect only:
- scripts/rfs/verify-flist.sh -m dist/flists/modules-6.12.44-Zero-OS.fl
- Inspect + tree:
- scripts/rfs/verify-flist.sh -m dist/flists/firmware-latest.fl --tree
- Inspect + mount test to a temp dir:
- sudo scripts/rfs/verify-flist.sh -m dist/flists/modules-6.12.44-Zero-OS.fl --mount
## Additional blob store backends (design)
This extends the existing S3/HTTP approach with a RESP/DB-style backend option for rfs blob storage. It is a design-only addition; CLI and scripts will be extended in a follow-up.
Scope
- Keep S3 flow intact via [scripts/rfs/common.sh](scripts/rfs/common.sh:137), [scripts/rfs/common.sh](scripts/rfs/common.sh:385), and [scripts/rfs/common.sh](scripts/rfs/common.sh:494).
- Introduce RESP URIs that can be encoded in config and, later, resolved by rfs or a thin uploader shim invoked by:
- [scripts/rfs/pack-modules.sh](scripts/rfs/pack-modules.sh:1)
- [scripts/rfs/pack-firmware.sh](scripts/rfs/pack-firmware.sh:1)
URI schemes (draft)
- resp://host:port/db?prefix=blobs
- resp+tls://host:port/db?prefix=blobs&ca=/etc/ssl/certs/ca.pem
- resp+sentinel://sentinelHost:26379/mymaster?prefix=blobs
- Credentials may be provided via URI userinfo or config (recommended: config only).
Operations (minimal set)
- PUT blob: write content-addressed key (e.g., prefix/ab/cd/hash)
- GET blob: fetch by exact key
- Exists/HEAD: presence test by key
- Optional batching: pipelined MGET for prefetch
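The content-addressed key layout mentioned above (prefix/ab/cd/hash) can be sketched as follows; fanning out by the first two byte-pairs of the hash is the usual scheme, but the exact layout rfs would use is an assumption here:

```shell
#!/bin/sh
# Sketch: derive a content-addressed blob key from a namespace prefix and hash.
set -eu

blob_key() {
  prefix="$1"
  hash="$2"
  echo "${prefix}/$(printf '%s' "$hash" | cut -c1-2)/$(printf '%s' "$hash" | cut -c3-4)/${hash}"
}
```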
Config keys (see example additions in config/rfs.conf.example)
- RESP_ENDPOINT (host:port), RESP_DB (integer), RESP_PREFIX (path namespace)
- RESP_USERNAME/RESP_PASSWORD (optional), RESP_TLS=0/1 (+ RESP_CA if needed)
- RESP_SENTINEL and RESP_MASTER for sentinel deployments
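A sketch of how the draft resp:// URI could be split into its parts; only the plain resp:// form is handled, and resp+tls:// / resp+sentinel:// would need extra cases:

```shell
#!/bin/sh
# Sketch: parse resp://host:port/db?prefix=blobs per the draft grammar above.
set -eu

parse_resp_uri() {
  rest=${1#resp://}          # host:port/db?prefix=...
  hostport=${rest%%/*}       # host:port
  dbquery=${rest#*/}         # db?prefix=...
  db=${dbquery%%\?*}
  prefix=${dbquery#*prefix=}
  echo "host=${hostport} db=${db} prefix=${prefix}"
}
```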
Manifests and routes
- Keep S3 store in flist stores table (fallback) while enabling route.url patching to HTTP/S3 for read-only access:
- Patch stores table as today via [scripts/rfs/common.sh](scripts/rfs/common.sh:385)
- Patch route.url as today via [scripts/rfs/common.sh](scripts/rfs/common.sh:494)
- RESP may be used primarily for pack-time blob uploads or as an additional store the CLI can consume later.
Security
- Do not embed write credentials in manifests.
- Read-only credentials may be embedded in route.url if required, mirroring S3 pattern.
Next steps
- Implement RESP uploader shim called from pack scripts; keep the CLI S3 flow unchanged.
- Extend config loader in [scripts/rfs/common.sh](scripts/rfs/common.sh:82) to parse RESP_* variables.
- Add verification routines to sanity-check connectivity before pack.
