Compare commits

...

17 Commits

Author SHA1 Message Date
947d156921 Added youki build and formatting of scripts
2025-11-11 20:49:36 +01:00
721e26a855 build: remove testing.sh in favor of runit.sh; add claude.md reference
Replace inline boot testing with standalone runit.sh runner for clarity:
- Remove scripts/lib/testing.sh source and boot_tests stage from build.sh
- Remove --skip-tests option from build.sh and rebuild-after-zinit.sh
- Update all docs to reference runit.sh for QEMU/cloud-hypervisor testing
- Add comprehensive claude.md as AI assistant entry point with guidelines

Testing is now fully decoupled from build pipeline; use ./runit.sh for
QEMU/cloud-hypervisor validation after builds complete.
2025-11-04 13:47:24 +01:00
334821dacf Integrate zosstorage build path and runtime orchestration
Summary:

* add openssh-client to the builder image and mount host SSH keys into the dev container when available

* switch RFS to git builds, register the zosstorage source, and document the extra Rust component

* wire zosstorage into the build: add build_zosstorage(), ship the binary in the initramfs, and extend component validation

* refresh kernel configuration to 6.12.49 while dropping Xen guest selections and enabling counted-by support

* tighten runtime configs: use cached mycelium key path, add zosstorage zinit unit, bootstrap ovsdb-server, and enable openvswitch module

* adjust the network health check ping invocation and fix the RFS pack-tree --debug flag order

* update NOTES changelog, README component list, and introduce a runit helper for qemu/cloud-hypervisor testing

* add ovsdb init script wiring under config/zinit/init and ensure zosstorage is available before mycelium
2025-10-14 17:47:13 +02:00
cf05e0ca5b rfs: add pack-tree.sh to pack arbitrary directory to flist using config/rfs.conf; enable --debug on rfs pack for verbose diagnostics
2025-10-02 17:41:16 +02:00
883ffcf734 components: read config/sources.conf to determine components, versions, and build funcs; remove hardcoded list. verification: accept rfs built or prebuilt binary paths. 2025-10-02 17:13:35 +02:00
818f5037f4 docs(TODO): use relative links from docs/ to ../scripts and ../config so links work in GitHub and VS Code 2025-10-02 11:44:41 +02:00
d5e9bf2d9a docs: add persistent TODO.md checklist with clickable references to code and configs 2025-10-01 18:09:28 +02:00
10ba31acb4 docs: regenerate scripts/functionlist.md; refresh NOTES with jump-points and roadmap; extend rfs-flists with RESP backend design. config: add RESP placeholders to rfs.conf.example. components: keep previous non-destructive git clone logic. 2025-10-01 18:06:13 +02:00
6193d241ea components: reuse existing git tree in components_download_git; config: update packages.list 2025-10-01 17:47:51 +02:00
4ca68ac0f7 Configuration changes:
- kernel config changes
- kernel version bump
- added sgdisk to initramfs packages for zosstorage to work
2025-09-30 14:41:49 +02:00
404e421411 toolfixes
2025-09-25 11:49:12 +02:00
d529d53827 Update README.md
2025-09-24 08:10:52 +00:00
2142876e3d Fixes:
- no getty on serial (console if specified)
  - zinit sequence
2025-09-24 01:42:18 +02:00
c70143acb8 Some fixes
- added pciutils
  - zinit sequence fixes
2025-09-24 00:37:20 +02:00
ad0a06e267 initramfs+modules: robust copy aliasing, curated stage1 + PHYs, firmware policy via firmware.conf, runtime readiness, build ID; docs sync
Summary of changes (with references):

Modules + PHY coverage
- Curated and normalized stage1 list in [config.modules.conf](config/modules.conf:1):
  - Boot-critical storage, core virtio, common NICs (Intel/Realtek/Broadcom), overlay/fuse, USB HCD/HID.
  - Added PHY drivers required by NIC MACs:
    * realtek (for r8169, etc.)
    * broadcom families: broadcom, bcm7xxx, bcm87xx, bcm_phy_lib, bcm_phy_ptp
- Robust underscore↔hyphen aliasing during copy so e.g. xhci_pci → xhci-pci.ko, hid_generic → hid-generic.ko:
  - [bash.initramfs_copy_resolved_modules()](scripts/lib/initramfs.sh:990)

Firmware policy and coverage
- Firmware selection now authoritative via [config/firmware.conf](config/firmware.conf:1); ignore modules.conf firmware hints:
  - [bash.initramfs_setup_modules()](scripts/lib/initramfs.sh:229)
  - Count from firmware.conf for reporting; remove stale required-firmware.list.
- Expanded NIC firmware set (bnx2, bnx2x, tigon, intel, realtek, rtl_nic, qlogic, e100) in [config.firmware.conf](config/firmware.conf:1).
- Installer enforces firmware.conf source-of-truth in [bash.alpine_install_firmware()](scripts/lib/alpine.sh:392).

Early input & build freshness
- Write a runtime build stamp to /etc/zero-os-build-id for embedded initramfs verification:
  - [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:568)
- Minor init refinements in [config.init](config/init:1) (ensures /home, consistent depmod path).

Rebuild helper improvements
- [scripts/rebuild-after-zinit.sh](scripts/rebuild-after-zinit.sh:1):
  - Added --verify-only; container-aware execution; selective marker clears only.
  - Prints stage status before/after; avoids --rebuild-from; resolves full kernel version for diagnostics.

Remote flist readiness + zinit
- Init scripts now probe BASE_URL readiness and accept FLISTS_BASE_URL/FLIST_BASE_URL; firmware target is /lib/firmware:
  - [sh.firmware.sh](config/zinit/init/firmware.sh:1)
  - [sh.modules.sh](config/zinit/init/modules.sh:1)

Container, docs, and utilities
- Stream container build logs by calling runtime build directly in [bash.docker_build_container()](scripts/lib/docker.sh:56).
- Docs updated to reflect firmware policy, runtime readiness, rebuild helper, early input, and GRUB USB:
  - [docs.NOTES.md](docs/NOTES.md)
  - [docs.PROMPT.md](docs/PROMPT.md)
  - [docs.review-rfs-integration.md](docs/review-rfs-integration.md)
- Added GRUB USB creator (referenced in docs): [scripts/make-grub-usb.sh](scripts/make-grub-usb.sh)

Cleanup
- Removed legacy/duplicated config trees under configs/ and config/zinit.old/.
- Minor newline and ignore fixes: [.gitignore](.gitignore:1)

Net effect
- Runtime now has correct USB HCDs/HID-generic and NIC+PHY coverage (Realtek/Broadcom), with matching firmware installed in initramfs.
- Rebuild workflow is minimal and host/container-aware; docs are aligned with implemented behavior.
2025-09-23 14:03:01 +02:00
2fba2bd4cd initramfs+kernel: path anchors, helper, and init debug hook
initramfs: anchor relative paths to PROJECT_ROOT in [bash.initramfs_validate()](scripts/lib/initramfs.sh:799) and [bash.initramfs_create_cpio()](scripts/lib/initramfs.sh:688) to avoid CWD drift. Add diagnostics logs.

kernel: anchor kernel output path to PROJECT_ROOT in [bash.kernel_build_with_initramfs()](scripts/lib/kernel.sh:174) to ensure dist/vmlinuz.efi is under PROJECT_ROOT/dist.

helper: add [scripts/rebuild-after-zinit.sh](scripts/rebuild-after-zinit.sh) to incrementally rebuild after zinit/modules.conf/init changes. Default: initramfs-only (recreates cpio). Flags: --with-kernel, --refresh-container-mods, --run-tests. Uses --rebuild-from=initramfs_create when rebuilding kernel.

init: add early debug shell on kernel param initdebug=true; prefer /init-debug when present else spawn /bin/sh -l. See [config/init](config/init:1).

modules(stage1): add USB keyboard support (HID + host controllers) in [config/modules.conf](config/modules.conf:1): usbhid, hid_generic, hid, xhci/ehci/ohci/uhci.
2025-09-20 16:11:44 +02:00
310e11d2bf rfs(firmware): pack full Alpine linux-firmware set from container and overmount /lib/firmware
Change pack source: install all linux-firmware* packages in container and pack from /lib/firmware via [bash.rfs_common_install_all_alpine_firmware_packages()](scripts/rfs/common.sh:290) used by [bash.pack-firmware.sh](scripts/rfs/pack-firmware.sh:21). At runtime, overmount firmware flist on /lib/firmware by updating [sh.firmware.sh](config/zinit/init/firmware.sh:10). Update docs to reflect /lib/firmware mount and new pack strategy.
2025-09-19 08:27:10 +02:00
1132 changed files with 3929 additions and 347084 deletions

33
AGENTS.md Normal file
View File

@@ -0,0 +1,33 @@
# AGENTS.md
This file provides guidance to agents when working with code in this repository.
Non-obvious build/run facts (read from scripts):
- Always build inside the containerized toolchain; ./scripts/build.sh will spawn a transient container, or use the persistent dev container via ./scripts/dev-container.sh start then ./scripts/dev-container.sh build.
- The incremental pipeline is stage-driven with .build-stages markers; list status via ./scripts/build.sh --show-stages and force a subset with --rebuild-from=<stage>.
- Outputs are anchored to PROJECT_ROOT (normalized in [bash.common.sh](scripts/lib/common.sh:236)):
- Kernel: dist/vmlinuz.efi (created by [bash.kernel_build_with_initramfs()](scripts/lib/kernel.sh:174))
- Initramfs archive: dist/initramfs.cpio.xz (created by [bash.initramfs_create_cpio()](scripts/lib/initramfs.sh:688))
- Kernel embeds the initramfs (CONFIG_INITRAMFS_SOURCE set by [bash.kernel_modify_config_for_initramfs()](scripts/lib/kernel.sh:130)); no separate initrd is required in normal flow.
Fast iteration helpers (init/zinit/modules.conf):
- Use ./scripts/rebuild-after-zinit.sh to re-copy /init, re-apply zinit, re-resolve/copy modules, and recreate the cpio (initramfs-only by default).
- Flags: --with-kernel (also re-embed kernel; forces --rebuild-from=initramfs_create), --refresh-container-mods (rebuild container /lib/modules if missing), --verify-only (report changes since last cpio).
- DEBUG=1 shows full safe_execute logs and stage timings.
Critical conventions (avoid breakage):
- Use logging/safety helpers from [bash.common.sh](scripts/lib/common.sh:1): log_info/warn/error/debug, safe_execute, section_header.
- Paths must be anchored to PROJECT_ROOT (already normalized after sourcing config) to avoid CWD drift (kernel builds cd into kernel/current).
- Do not edit /etc/shadow directly; passwordless root is applied by chroot ${initramfs_dir} passwd -d root in [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:575).
- initdebug=true on kernel cmdline opens an early shell from [config/init](config/init) even without /init-debug.
Modules and firmware (non-obvious flow):
- Container /lib/modules/<FULL_VERSION> is the authoritative source for dependency resolution and copying into initramfs (ensure kernel_modules stage ran at least once).
- RFS firmware pack now installs all linux-firmware* into container and packs from /lib/firmware; runtime overmount targets /lib/firmware (not /usr/lib/firmware).
Troubleshooting gotchas:
- If “Initramfs directory not found: initramfs” or kernel output in wrong place, path anchoring patches exist in [bash.initramfs_create_cpio()](scripts/lib/initramfs.sh:688), [bash.initramfs_validate()](scripts/lib/initramfs.sh:799), and [bash.kernel_build_with_initramfs()](scripts/lib/kernel.sh:174); ensure DEBUG logs show “... anchored to PROJECT_ROOT”.
- On host, the rebuild helper delegates to ./scripts/dev-container.sh so you don't need to manually start the container.
Utilities:
- Create BIOS+UEFI USB with embedded-initramfs kernel: sudo ./scripts/make-grub-usb.sh /dev/sdX --kparams "console=ttyS0 initdebug=true" (use --with-initrd only if you want a separate initrd on ESP).

View File

@@ -7,6 +7,7 @@ RUN apk add --no-cache \
rustup \
upx \
git \
openssh-client \
wget \
curl \
tar \
@@ -19,6 +20,7 @@ RUN apk add --no-cache \
musl-utils \
pkgconfig \
openssl openssl-dev \
libseccomp libseccomp-dev \
perl \
shadow \
bash \

132
README.md
View File

@@ -7,7 +7,7 @@ A comprehensive build system for creating custom Alpine Linux 3.22 x86_64 initra
- **Alpine Linux 3.22** miniroot as base system
- **zinit** process manager (complete OpenRC replacement)
- **Rootless containers** (Docker/Podman compatible)
- **Rust components** with musl targeting (zinit, rfs, mycelium)
- **Rust components** with musl targeting (zinit, rfs, mycelium, zosstorage)
- **Aggressive optimization** (strip + UPX compression)
- **2-stage module loading** for hardware support
- **GitHub Actions** compatible build pipeline
@@ -103,8 +103,7 @@ zosbuilder/
│ │ ├── alpine.sh # Alpine operations
│ │ ├── components.sh # source building
│ │ ├── initramfs.sh # assembly & optimization
│ │ ├── kernel.sh # kernel building
│ │ └── testing.sh # QEMU/cloud-hypervisor
│ │ └── kernel.sh # kernel building
│ ├── build.sh # main orchestrator
│ └── clean.sh # cleanup script
├── initramfs/ # build output (generated)
@@ -222,6 +221,7 @@ Services are migrated from existing `configs/zinit/` directory with proper initi
- **zinit**: Standard cargo build
- **rfs**: Standard cargo build
- **mycelium**: Build in `myceliumd/` subdirectory
- **zosstorage**: Build from the storage orchestration component for Zero-OS
4. Install binaries to initramfs
### Phase 4: System Configuration
@@ -243,34 +243,40 @@ Services are migrated from existing `configs/zinit/` directory with proper initi
### Phase 6: Packaging
1. Create `initramfs.cpio.xz` with XZ compression
2. Build kernel with embedded initramfs
3. Generate `vmlinuz.efi`
3. Generate `vmlinuz.efi` (default kernel)
4. Generate versioned kernel: `vmlinuz-{VERSION}-{ZINIT_HASH}.efi`
5. Optionally upload versioned kernel to S3 (set `UPLOAD_KERNEL=true`)
## Testing
### QEMU Testing
```bash
# Boot test with QEMU
./scripts/test.sh --qemu
# Boot test with QEMU (default)
./runit.sh
# With serial console
./scripts/test.sh --qemu --serial
# With custom parameters
./runit.sh --hypervisor qemu --memory 2048 --disks 3
```
### cloud-hypervisor Testing
```bash
# Boot test with cloud-hypervisor
./scripts/test.sh --cloud-hypervisor
./runit.sh --hypervisor ch
# With disk reset
./runit.sh --hypervisor ch --reset --disks 5
```
### Custom Testing
### Advanced Options
```bash
# Manual QEMU command
qemu-system-x86_64 \
-kernel dist/vmlinuz.efi \
-m 512M \
-nographic \
-serial mon:stdio \
-append "console=ttyS0,115200 console=tty1 loglevel=7"
# See all options
./runit.sh --help
# Custom disk size and bridge
./runit.sh --disk-size 20G --bridge zosbr --disks 4
# Environment variables
HYPERVISOR=ch NUM_DISKS=5 ./runit.sh
```
## Size Optimization
@@ -320,7 +326,7 @@ jobs:
- name: Build
run: ./scripts/build.sh
- name: Test
run: ./scripts/test.sh --qemu
run: ./runit.sh --hypervisor qemu
```
## Advanced Usage
@@ -353,6 +359,72 @@ function build_myapp() {
}
```
### S3 Uploads (Kernel & RFS Flists)
Automatically upload build artifacts to S3-compatible storage:
#### Configuration
Create `config/rfs.conf`:
```bash
S3_ENDPOINT="https://s3.example.com:9000"
S3_REGION="us-east-1"
S3_BUCKET="zos"
S3_PREFIX="flists/zosbuilder"
S3_ACCESS_KEY="YOUR_ACCESS_KEY"
S3_SECRET_KEY="YOUR_SECRET_KEY"
```
#### Upload Kernel
```bash
# Enable kernel upload
UPLOAD_KERNEL=true ./scripts/build.sh
# Custom kernel subpath (default: kernel)
KERNEL_SUBPATH=kernels UPLOAD_KERNEL=true ./scripts/build.sh
```
**Uploaded files:**
- `s3://{bucket}/{prefix}/kernel/vmlinuz-{VERSION}-{ZINIT_HASH}.efi` - Versioned kernel
- `s3://{bucket}/{prefix}/kernel/kernels.txt` - Text index (one kernel per line)
- `s3://{bucket}/{prefix}/kernel/kernels.json` - JSON index with metadata
**Index files:**
The build automatically generates and uploads index files listing all available kernels in the S3 bucket. This enables:
- Easy kernel selection in web UIs (dropdown menus)
- Programmatic access without S3 API listing
- Metadata like upload timestamp and kernel count (JSON format)
**JSON index format:**
```json
{
"kernels": [
"vmlinuz-6.12.44-Zero-OS-abc1234.efi",
"vmlinuz-6.12.44-Zero-OS-def5678.efi"
],
"updated": "2025-01-04T12:00:00Z",
"count": 2
}
```
#### Upload RFS Flists
```bash
# Enable flist uploads
UPLOAD_MANIFESTS=true ./scripts/build.sh
```
Uploaded as:
- `s3://{bucket}/{prefix}/manifests/modules-{VERSION}.fl`
- `s3://{bucket}/{prefix}/manifests/firmware-{TAG}.fl`
#### Requirements
- MinIO Client (`mcli` or `mc`) must be installed
- Valid S3 credentials in `config/rfs.conf`
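For orientation, the upload step amounts to something like the following MinIO-client session. This is a hedged sketch: the alias name `zos-s3`, the endpoint, and the object path are placeholders, and the real logic lives in the build scripts.
```bash
# Illustrative only: alias name, endpoint, and paths are placeholders.
mc alias set zos-s3 "https://s3.example.com:9000" "$S3_ACCESS_KEY" "$S3_SECRET_KEY"
mc cp dist/vmlinuz-6.12.44-Zero-OS-abc1234.efi zos-s3/zos/flists/zosbuilder/kernel/
mc ls zos-s3/zos/flists/zosbuilder/kernel/
```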
### Container Builds
Build in isolated container:
@@ -369,7 +441,7 @@ podman run --rm \
./scripts/build.sh
```
### Cross-Platform Support
### Cross-Platform Support (totally untested)
The build system supports multiple architectures:
@@ -419,15 +491,6 @@ export DEBUG=1
./scripts/build.sh
```
### Size Analysis
```bash
# Analyze initramfs contents
./scripts/analyze-size.sh
# Show largest files
find initramfs/ -type f -exec du -h {} \; | sort -rh | head -20
```
## Contributing
1. **Fork** the repository
@@ -440,13 +503,16 @@ find initramfs/ -type f -exec du -h {} \; | sort -rh | head -20
```bash
# Setup development environment
./scripts/setup-dev.sh
./scripts/dev-container.sh start
# Run tests
./scripts/test.sh --all
# Run incremental build
./scripts/build.sh
# Check size impact
./scripts/analyze-size.sh --compare
# Test with QEMU
./runit.sh --hypervisor qemu
# Test with cloud-hypervisor
./runit.sh --hypervisor ch
```
## License

523
claude.md Normal file
View File

@@ -0,0 +1,523 @@
# Claude Code Reference for Zero-OS Builder
This document provides essential context for Claude Code (or any AI assistant) working with this Zero-OS Alpine Initramfs Builder repository.
## Project Overview
**What is this?**
A sophisticated build system for creating custom Alpine Linux 3.22 x86_64 initramfs images with zinit process management, designed for Zero-OS deployment on ThreeFold Grid.
**Key Features:**
- Container-based reproducible builds (rootless podman/docker)
- Incremental staged build pipeline with completion markers
- zinit process manager (complete OpenRC replacement)
- RFS (Remote File System) for lazy-loading modules/firmware from S3
- Rust components built with musl static linking
- Aggressive size optimization (strip + UPX)
- Embedded initramfs in kernel (single vmlinuz.efi output)
## Repository Structure
```
zosbuilder/
├── config/ # All configuration files
│ ├── build.conf # Build settings (versions, paths, flags)
│ ├── packages.list # Alpine packages to install
│ ├── sources.conf # ThreeFold components to build
│ ├── modules.conf # 2-stage kernel module loading
│ ├── firmware.conf # Firmware to include in initramfs
│ ├── kernel.config # Linux kernel configuration
│ ├── init # /init script for initramfs
│ └── zinit/ # zinit service definitions (YAML)
├── scripts/
│ ├── build.sh # Main orchestrator (DO NOT EDIT LIGHTLY)
│ ├── clean.sh # Clean all artifacts
│ ├── dev-container.sh # Persistent dev container manager
│ ├── rebuild-after-zinit.sh # Quick rebuild helper
│ ├── lib/ # Modular build libraries
│ │ ├── common.sh # Logging, path normalization, utilities
│ │ ├── stages.sh # Incremental stage tracking
│ │ ├── docker.sh # Container lifecycle
│ │ ├── alpine.sh # Alpine extraction, packages, cleanup
│ │ ├── components.sh # Build Rust components from sources.conf
│ │ ├── initramfs.sh # Assembly, optimization, CPIO creation
│ │ └── kernel.sh # Kernel download, config, build, embed
│ └── rfs/ # RFS flist generation scripts
│ ├── common.sh # S3 config, version computation
│ ├── pack-modules.sh # Create modules flist
│ ├── pack-firmware.sh # Create firmware flist
│ └── verify-flist.sh # Inspect/test flists
├── docs/ # Detailed documentation
│ ├── NOTES.md # Operational knowledge & troubleshooting
│ ├── PROMPT.md # Agent guidance (strict debugger mode)
│ ├── TODO.md # Persistent checklist with code refs
│ ├── AGENTS.md # Quick reference for agents
│ ├── rfs-flists.md # RFS design and runtime flow
│ ├── review-rfs-integration.md # Integration points
│ └── depmod-behavior.md # Module dependency details
├── runit.sh # Test runner (QEMU/cloud-hypervisor)
├── initramfs/ # Generated initramfs tree
├── components/ # Generated component sources
├── kernel/ # Generated kernel source
├── dist/ # Final outputs
│ ├── vmlinuz.efi # Kernel with embedded initramfs
│ └── initramfs.cpio.xz # Standalone initramfs archive
└── .build-stages/ # Incremental build markers (*.done files)
```
## Core Concepts
### 1. Incremental Staged Builds
**How it works:**
- Each stage creates a `.build-stages/<stage_name>.done` marker on success
- Subsequent builds skip completed stages unless forced
- Use `./scripts/build.sh --show-stages` to see status
- Use `./scripts/build.sh --rebuild-from=<stage>` to restart from a specific stage
- Manually remove `.done` files to re-run specific stages
**Build stages (in order):**
```
alpine_extract → alpine_configure → alpine_packages → alpine_firmware
→ components_build → components_verify → kernel_modules
→ init_script → components_copy → zinit_setup
→ modules_setup → modules_copy → cleanup → rfs_flists
→ validation → initramfs_create → initramfs_test → kernel_build
```
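A minimal sketch of the marker pattern, with illustrative function names (the real implementation is `scripts/lib/stages.sh`):
```bash
# Sketch of .done-marker stage tracking; names are illustrative,
# see scripts/lib/stages.sh for the real implementation.
STAGE_DIR=".build-stages"

stage_is_done()   { [ -f "${STAGE_DIR}/$1.done" ]; }
stage_mark_done() { mkdir -p "${STAGE_DIR}" && touch "${STAGE_DIR}/$1.done"; }

run_stage() {
    local name="$1"; shift
    if stage_is_done "$name"; then
        echo "[skip] $name (marker present)"
        return 0
    fi
    "$@" && stage_mark_done "$name"
}

# e.g. re-run one stage by deleting its marker first:
# rm -f .build-stages/initramfs_create.done && ./scripts/build.sh
```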
**Key insight:** The build ALWAYS runs inside a container. Host invocations auto-spawn containers.
### 2. Container-First Architecture
**Why containers?**
- Reproducible toolchain (Alpine 3.22 base with exact dependencies)
- Rootless execution (no privileged access needed)
- Isolation from host environment
- GitHub Actions compatible
**Container modes:**
- **Transient:** `./scripts/build.sh` spawns, builds, exits
- **Persistent:** `./scripts/dev-container.sh start/shell/build`
**Important:** Directory paths are normalized to absolute PROJECT_ROOT to avoid CWD issues when stages change directories (especially kernel builds).
### 3. Component Build System
**sources.conf format:**
```
TYPE:NAME:URL:VERSION:BUILD_FUNCTION[:EXTRA]
```
**Example:**
```bash
git:zinit:https://github.com/threefoldtech/zinit:master:build_zinit
git:rfs:https://github.com/threefoldtech/rfs:development:build_rfs
git:mycelium:https://github.com/threefoldtech/mycelium:0.6.1:build_mycelium
release:corex:https://github.com/threefoldtech/corex/releases/download/2.1.4/corex-2.1.4-amd64-linux-static:2.1.4:install_corex:rename=corex
```
**Build functions** are defined in `scripts/lib/components.sh` and handle:
- Rust builds with `x86_64-unknown-linux-musl` target
- Static linking via `RUSTFLAGS="-C target-feature=+crt-static"`
- Special cases (e.g., mycelium builds in `myceliumd/` subdirectory)
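The rough shape of such a function, as a hedged sketch (paths and the install destination are assumptions; the authoritative versions live in `scripts/lib/components.sh`):
```bash
# Hypothetical shape of a component build function; the real build_zinit()
# lives in scripts/lib/components.sh.
build_zinit() {
    cd "${COMPONENTS_DIR}/zinit"
    RUSTFLAGS="-C target-feature=+crt-static" \
        cargo build --release --target x86_64-unknown-linux-musl
    install -m 0755 target/x86_64-unknown-linux-musl/release/zinit \
        "${INSTALL_DIR}/sbin/zinit"
}
```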
### 4. RFS Flists (Remote File System)
**Purpose:** Lazy-load kernel modules and firmware from S3 at runtime
**Flow:**
1. Build stage creates flists: `modules-<KERNEL_VERSION>.fl` and `firmware-<TAG>.fl`
2. Flists are SQLite databases containing:
- Content-addressed blob references
- S3 store URIs (patched for read-only access)
- Directory tree metadata
3. Flists embedded in initramfs at `/etc/rfs/`
4. Runtime: zinit units mount flists over `/lib/modules/` and `/lib/firmware/`
5. Dual udev coldplug: early (before RFS) for networking, post-RFS for new hardware
**Key files:**
- `scripts/rfs/pack-modules.sh` - Creates modules flist from container `/lib/modules/`
- `scripts/rfs/pack-firmware.sh` - Creates firmware flist from Alpine packages
- `config/zinit/init/modules.sh` - Runtime mount script
- `config/zinit/init/firmware.sh` - Runtime mount script
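`scripts/rfs/verify-flist.sh` is the supported way to inspect a manifest, but because a `.fl` is just a SQLite database you can also peek at it directly (the path below is an example; schema details are internal to rfs and may change between versions):
```bash
# Read-only peek at a flist manifest with the sqlite3 CLI.
FL=/etc/rfs/modules-6.12.49-Zero-OS.fl   # example path; adjust to your build
sqlite3 "$FL" ".tables"
sqlite3 "$FL" ".schema" | head -n 20
```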
### 5. zinit Service Management
**No OpenRC:** This system uses zinit exclusively for process management.
**Service graph:**
```
/init → zinit → [stage1-modules, udevd, depmod]
→ udev-trigger (early coldplug)
→ network
→ rfs-modules + rfs-firmware (mount flists)
→ udev-rfs (post-RFS coldplug)
→ services
```
**Service definitions:** YAML files in `config/zinit/` with `after:`, `needs:`, `wants:` dependencies
### 6. Kernel Versioning and S3 Upload
**Versioned Kernel Output:**
- Standard kernel: `dist/vmlinuz.efi` (for compatibility)
- Versioned kernel: `dist/vmlinuz-{VERSION}-{ZINIT_HASH}.efi`
- Example: `vmlinuz-6.12.44-Zero-OS-a1b2c3d.efi`
**Version components:**
- `{VERSION}`: Full kernel version from `KERNEL_VERSION` + `CONFIG_LOCALVERSION`
- `{ZINIT_HASH}`: Short git commit hash from `components/zinit/.git`
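As a hedged sketch, composing the versioned name by hand would look roughly like this (the real code is `kernel_build_with_initramfs()` in `scripts/lib/kernel.sh`):
```bash
# Reconstruct the versioned kernel filename; illustrative only.
KERNEL_VERSION="6.12.44"                  # from config/build.conf
LOCALVERSION="$(sed -n 's/^CONFIG_LOCALVERSION="\(.*\)"/\1/p' config/kernel.config)"
ZINIT_HASH="$(git -C components/zinit rev-parse --short HEAD)"
cp dist/vmlinuz.efi "dist/vmlinuz-${KERNEL_VERSION}${LOCALVERSION}-${ZINIT_HASH}.efi"
# -> dist/vmlinuz-6.12.44-Zero-OS-a1b2c3d.efi (assuming CONFIG_LOCALVERSION="-Zero-OS")
```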
**S3 Upload (optional):**
- Controlled by `UPLOAD_KERNEL=true` environment variable
- Uses MinIO client (`mcli` or `mc`) to upload to S3-compatible storage
- Uploads versioned kernel to: `s3://{bucket}/{prefix}/kernel/{versioned_filename}`
**Kernel Index Generation:**
After uploading, automatically generates and uploads index files:
- `kernels.txt` - Plain text, one kernel per line, sorted reverse chronologically
- `kernels.json` - JSON format with metadata (timestamp, count)
**Why index files?**
- S3 web interfaces often don't support directory listings
- Enables dropdown menus in web UIs without S3 API access
- Provides kernel discovery for deployment tools
**JSON index structure:**
```json
{
"kernels": ["vmlinuz-6.12.44-Zero-OS-abc1234.efi", ...],
"updated": "2025-01-04T12:00:00Z",
"count": 2
}
```
**Key functions:**
- `get_git_commit_hash()` in `scripts/lib/common.sh` - Extracts git hash
- `kernel_build_with_initramfs()` in `scripts/lib/kernel.sh` - Creates versioned kernel
- `kernel_upload_to_s3()` in `scripts/lib/kernel.sh` - Uploads to S3
- `kernel_generate_index()` in `scripts/lib/kernel.sh` - Generates and uploads index
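A hedged sketch of what index generation boils down to (the bucket path and alias are placeholders; the real code is `kernel_generate_index()`):
```bash
# Rebuild kernels.txt / kernels.json from the bucket listing; illustrative only.
mc ls zos-s3/zos/flists/zosbuilder/kernel/ | awk '{print $NF}' \
    | grep '\.efi$' | sort -r > kernels.txt

printf '{"kernels":[%s],"updated":"%s","count":%d}\n' \
    "$(sed 's/.*/"&"/' kernels.txt | paste -sd, -)" \
    "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    "$(wc -l < kernels.txt)" > kernels.json
```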
## Critical Conventions
### Path Normalization
**Problem:** Stages can change CWD (kernel build uses `/workspace/kernel/current`)
**Solution:** All paths normalized to absolute at startup in `scripts/lib/common.sh:244`
**Variables affected:**
- `INSTALL_DIR` (initramfs/)
- `COMPONENTS_DIR` (components/)
- `KERNEL_DIR` (kernel/)
- `DIST_DIR` (dist/)
**Never use relative paths** when calling functions that might be in different CWDs.
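A minimal sketch of the normalization idea (the derivation of `PROJECT_ROOT` here is an assumption; the authoritative code is around `scripts/lib/common.sh:244`):
```bash
# Anchor a possibly-relative directory setting to PROJECT_ROOT; illustrative.
PROJECT_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
case "${INSTALL_DIR}" in
    /*) ;;                                           # already absolute, keep
    *)  INSTALL_DIR="${PROJECT_ROOT}/${INSTALL_DIR}" ;;
esac
# COMPONENTS_DIR, KERNEL_DIR and DIST_DIR get the same treatment.
```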
### Branding and Security
**Passwordless root enforcement:**
- Applied in `scripts/lib/initramfs.sh:575` via `passwd -d -R "${initramfs_dir}" root`
- Creates `root::` in `/etc/shadow` (empty password field)
- Controlled by `ZEROOS_BRANDING` and `ZEROOS_PASSWORDLESS_ROOT` flags
**Never edit /etc/shadow manually** - always use `passwd` or `chpasswd` with chroot.
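A short sketch of applying and verifying the entry against an extracted initramfs tree (paths are illustrative):
```bash
# Apply passwordless root inside the initramfs tree and verify the result.
chroot "${initramfs_dir}" passwd -d root
grep -q '^root::' "${initramfs_dir}/etc/shadow" && echo "passwordless root OK"
```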
### Module Loading Strategy
**2-stage approach:**
- **Stage 1:** Critical boot modules (virtio, e1000, scsi) - loaded by zinit early
- **Stage 2:** Extended hardware (igb, ixgbe, i40e) - loaded after network
**Config:** `config/modules.conf` with `stage1:` and `stage2:` prefixes
**Dependency resolution:**
- Uses `modinfo` to build dependency tree
- Resolves from container `/lib/modules/<FULL_VERSION>/`
- Must run after `kernel_modules` stage
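A hedged sketch of modinfo-based dependency expansion against the container module tree (the real resolver lives in `scripts/lib/initramfs.sh`; names here are illustrative):
```bash
# Recursively expand module dependencies via modinfo; illustrative only.
FULL_VERSION="$(ls /lib/modules | head -n1)"

resolve_deps() {
    local mod="$1"
    modinfo -k "$FULL_VERSION" -F depends "$mod" 2>/dev/null \
        | tr ',' '\n' | sed '/^$/d' \
        | while read -r dep; do
              resolve_deps "$dep"
              echo "$dep"
          done
    echo "$mod"
}

resolve_deps r8169 | sort -u    # prints r8169 plus any modules it depends on
```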
### Firmware Policy
**For initramfs:** `config/firmware.conf` is the SINGLE source of truth
- Any firmware hints in `modules.conf` are IGNORED
- Prevents duplication/version mismatches
**For RFS:** Full Alpine `linux-firmware*` packages installed in container
- Packed from container `/lib/firmware/`
- Overmounts at runtime for extended hardware
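A hedged sketch of how the `PACKAGE:DESCRIPTION` lines in `config/firmware.conf` can drive the initramfs install (whether the real `alpine_install_firmware()` uses `apk --root` or copies files is an implementation detail of `scripts/lib/alpine.sh`):
```bash
# Install every firmware package listed in firmware.conf into the initramfs
# tree; the apk --root approach is an assumption for illustration.
grep -v '^[[:space:]]*#' config/firmware.conf | grep -v '^[[:space:]]*$' \
    | cut -d: -f1 \
    | while read -r pkg; do
          apk add --no-cache --root "${INSTALL_DIR}" "$pkg"
      done
```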
## Common Workflows
### Full Build from Scratch
```bash
# Clean everything and rebuild
./scripts/build.sh --clean
# Or just rebuild all stages
./scripts/build.sh --force-rebuild
```
### Quick Iteration After Config Changes
```bash
# After editing zinit configs, init script, or modules.conf
./scripts/rebuild-after-zinit.sh
# With kernel rebuild
./scripts/rebuild-after-zinit.sh --with-kernel
# Dry-run to see what changed
./scripts/rebuild-after-zinit.sh --verify-only
```
### Minimal Manual Rebuild
```bash
# Remove specific stages
rm -f .build-stages/initramfs_create.done
rm -f .build-stages/validation.done
# Rebuild only those stages
DEBUG=1 ./scripts/build.sh
```
### Testing the Built Kernel
```bash
# QEMU (default)
./runit.sh
# cloud-hypervisor with 5 disks
./runit.sh --hypervisor ch --disks 5 --reset
# Custom memory and bridge
./runit.sh --memory 4096 --bridge zosbr
```
### Persistent Dev Container
```bash
# Start persistent container
./scripts/dev-container.sh start
# Enter shell
./scripts/dev-container.sh shell
# Run build inside
./scripts/dev-container.sh build
# Stop container
./scripts/dev-container.sh stop
```
## Debugging Guidelines
### Diagnostics-First Approach
**ALWAYS add diagnostics before fixes:**
1. Enable `DEBUG=1` for verbose safe_execute logs
2. Add strategic `log_debug` statements
3. Confirm hypothesis in logs
4. Then apply minimal fix
**Example:**
```bash
# Bad: Guess and fix
Edit file to fix suspected issue
# Good: Diagnose first
1. Add log_debug "Variable X=${X}, resolved=${resolved_path}"
2. DEBUG=1 ./scripts/build.sh
3. Confirm in output
4. Apply fix with evidence
```
### Key Diagnostic Functions
- `scripts/lib/common.sh`: `log_info`, `log_warn`, `log_error`, `log_debug`
- `scripts/lib/initramfs.sh:820`: Validation debug prints (input, PWD, PROJECT_ROOT, resolved paths)
- `scripts/lib/initramfs.sh:691`: Pre-CPIO sanity checks with file listings
### Common Issues and Solutions
**"Initramfs directory not found"**
- **Cause:** INSTALL_DIR interpreted as relative in different CWD
- **Fix:** Already patched - paths normalized at startup
- **Check:** Look for "Validation debug:" logs showing resolved paths
**"INITRAMFS_ARCHIVE unbound"**
- **Cause:** Incremental build skipped initramfs_create stage
- **Fix:** Already patched - stages default INITRAMFS_ARCHIVE if unset
- **Check:** `scripts/build.sh:401` logs "defaulting INITRAMFS_ARCHIVE"
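The guard in question is just the usual shell defaulting pattern, roughly (a sketch; the actual line is in `scripts/build.sh`):
```bash
# Default INITRAMFS_ARCHIVE when an earlier stage was skipped; illustrative.
: "${INITRAMFS_ARCHIVE:=${DIST_DIR}/initramfs.cpio.xz}"
log_debug "defaulting INITRAMFS_ARCHIVE=${INITRAMFS_ARCHIVE}"
```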
**"Module dependency resolution fails"**
- **Cause:** Container `/lib/modules/<FULL_VERSION>` missing or stale
- **Fix:** `./scripts/rebuild-after-zinit.sh --refresh-container-mods`
- **Check:** Ensure `kernel_modules` stage completed successfully
**"Passwordless root not working"**
- **Cause:** Branding disabled or shadow file not updated
- **Fix:** Check ZEROOS_BRANDING=true in logs, verify /etc/shadow has `root::`
- **Verify:** Extract initramfs and `grep '^root:' etc/shadow`
## Important Files Quick Reference
### Must-Read Before Editing
- `scripts/build.sh` - Orchestrator with precise stage order
- `scripts/lib/common.sh` - Path normalization, logging, utilities
- `scripts/lib/stages.sh` - Stage tracking logic
- `config/build.conf` - Version pins, directory settings, flags
### Safe to Edit
- `config/zinit/*.yaml` - Service definitions
- `config/zinit/init/*.sh` - Runtime initialization scripts
- `config/modules.conf` - Module lists (stage1/stage2)
- `config/firmware.conf` - Initramfs firmware selection
- `config/packages.list` - Alpine packages
### Generated (Never Edit)
- `initramfs/` - Assembled initramfs tree
- `components/` - Downloaded component sources
- `kernel/` - Kernel source tree
- `dist/` - Build outputs
- `.build-stages/` - Completion markers
## Testing Architecture
**No built-in tests during build** - Tests run separately via `runit.sh`
**Why?**
- Build is for assembly, not validation
- Tests require hypervisor (QEMU/cloud-hypervisor)
- Separation allows faster iteration
**runit.sh features:**
- Multi-disk support (qcow2 for QEMU, raw for cloud-hypervisor)
- Network bridge/TAP configuration
- Persistent volumes (reset with `--reset`)
- Serial console logging
## Quick Command Reference
```bash
# Build
./scripts/build.sh # Incremental build
./scripts/build.sh --clean # Clean build
./scripts/build.sh --show-stages # Show completion status
./scripts/build.sh --rebuild-from=zinit_setup # Rebuild from stage
DEBUG=1 ./scripts/build.sh # Verbose output
# Rebuild helpers
./scripts/rebuild-after-zinit.sh # After zinit/init/modules changes
./scripts/rebuild-after-zinit.sh --with-kernel # Also rebuild kernel
./scripts/rebuild-after-zinit.sh --verify-only # Dry-run
# Testing
./runit.sh # QEMU test
./runit.sh --hypervisor ch # cloud-hypervisor test
./runit.sh --help # All options
# Dev container
./scripts/dev-container.sh start # Start persistent container
./scripts/dev-container.sh shell # Enter shell
./scripts/dev-container.sh build # Build inside container
./scripts/dev-container.sh stop # Stop container
# Cleanup
./scripts/clean.sh # Remove all generated files
rm -rf .build-stages/ # Reset stage markers
```
## Environment Variables
**Build control:**
- `DEBUG=1` - Enable verbose logging
- `FORCE_REBUILD=true` - Force rebuild all stages
- `REBUILD_FROM_STAGE=<name>` - Rebuild from specific stage
**Version overrides:**
- `ALPINE_VERSION=3.22` - Alpine Linux version
- `KERNEL_VERSION=6.12.44` - Linux kernel version
- `RUST_TARGET=x86_64-unknown-linux-musl` - Rust compilation target
**Firmware tagging:**
- `FIRMWARE_TAG=20250908` - Firmware flist version tag
**S3 upload control:**
- `UPLOAD_KERNEL=true` - Upload versioned kernel to S3 (default: false)
- `UPLOAD_MANIFESTS=true` - Upload RFS flists to S3 (default: false)
- `KERNEL_SUBPATH=kernel` - S3 subpath for kernel uploads (default: kernel)
**S3 configuration:**
- See `config/rfs.conf` for S3 endpoint, credentials, paths
- Used by both RFS flist uploads and kernel uploads
## Documentation Hierarchy
**Start here:**
1. `README.md` - User-facing guide with features and setup
2. This file (`claude.md`) - AI assistant context
**For development:**
3. `docs/NOTES.md` - Operational knowledge, troubleshooting
4. `docs/AGENTS.md` - Quick agent reference
5. `docs/TODO.md` - Current work checklist with code links
**For deep dives:**
6. `docs/PROMPT.md` - Strict debugger agent mode (diagnostics-first)
7. `docs/rfs-flists.md` - RFS design and implementation
8. `docs/review-rfs-integration.md` - Integration points analysis
9. `docs/depmod-behavior.md` - Module dependency deep dive
**Historical:**
10. `IMPLEMENTATION_PLAN.md` - Original design document
11. `GITHUB_ACTIONS.md` - CI/CD setup guide
## Project Philosophy
1. **Reproducibility:** Container-based builds ensure identical results
2. **Incrementality:** Stage markers minimize rebuild time
3. **Diagnostics-first:** Log before fixing, validate assumptions
4. **Minimal intervention:** Alpine + zinit only, no systemd/OpenRC
5. **Size-optimized:** Aggressive cleanup, strip, UPX compression
6. **Remote-ready:** RFS enables lazy-loading for extended hardware support
## Commit Message Guidelines
**DO NOT add Claude Code or AI assistant references to commit messages.**
Keep commits clean and professional:
- Focus on what changed and why
- Use conventional commit prefixes: `build:`, `docs:`, `fix:`, `feat:`, `refactor:`
- Be concise but descriptive
- No emoji unless project convention
- No "Generated with Claude Code" or "Co-Authored-By: Claude" footers
**Good example:**
```
build: remove testing.sh in favor of runit.sh
Replace inline boot testing with standalone runit.sh runner.
Tests now run separately from build pipeline for faster iteration.
```
**Bad example:**
```
build: remove testing.sh 🤖
Made some changes to testing.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
```
## Keywords for Quick Search
- **Build fails:** Check `DEBUG=1` logs, stage completion markers, container state
- **Module issues:** `kernel_modules` stage, `CONTAINER_MODULES_PATH`, depmod logs
- **Firmware missing:** `config/firmware.conf` for initramfs, RFS flist for runtime
- **zinit problems:** Service YAML syntax, dependency order, init script errors
- **Path errors:** Absolute path normalization in `common.sh:244`
- **Size too large:** Check cleanup stage, strip/UPX execution, package list
- **Container issues:** Rootless setup, subuid/subgid, podman vs docker
- **RFS mount fails:** S3 credentials, network readiness, flist manifest paths
- **Kernel upload:** `UPLOAD_KERNEL=true`, requires `config/rfs.conf`, MinIO client (`mcli`/`mc`)
- **Kernel index:** Auto-generated `kernels.txt`/`kernels.json` for dropdown UIs, updated on upload
---
**Last updated:** 2025-01-04
**Maintainer notes:** This file is the entry point for AI assistants. Keep it updated when architecture changes. Cross-reference with `docs/NOTES.md` for operational details.

View File

@@ -3,7 +3,7 @@
# System versions
ALPINE_VERSION="3.22"
KERNEL_VERSION="6.12.44"
KERNEL_VERSION="6.12.49"
# Rust configuration
RUST_TARGET="x86_64-unknown-linux-musl"
@@ -61,6 +61,7 @@ ENABLE_STRIP="true"
ENABLE_UPX="true"
ENABLE_AGGRESSIVE_CLEANUP="true"
ENABLE_2STAGE_MODULES="true"
UPLOAD_KERNEL=true
# Debug and development
DEBUG_DEFAULT="0"

View File

@@ -2,12 +2,15 @@
# Alpine Linux provides separate firmware packages for hardware support
# Format: FIRMWARE_PACKAGE:DESCRIPTION
# Essential network firmware packages
linux-firmware-bnx2:Broadcom NetXtreme firmware
# Essential network firmware packages (wired NICs matching stage1 drivers)
linux-firmware-bnx2:Broadcom NetXtreme (bnx2) firmware
linux-firmware-bnx2x:Broadcom NetXtreme II (bnx2x) firmware
linux-firmware-tigon:Broadcom tg3 (Tigon) firmware
linux-firmware-e100:Intel PRO/100 firmware
linux-firmware-intel:Intel network and WiFi firmware (includes e1000e, igb, ixgbe, i40e, ice)
linux-firmware-realtek:Realtek network firmware (r8169, etc.)
linux-firmware-qlogic:QLogic network firmware
linux-firmware-intel:Intel NIC firmware (covers e1000e, igb, ixgbe, i40e, ice)
linux-firmware-rtl_nic:Realtek NIC firmware (r8169, etc.)
linux-firmware-realtek:Realtek NIC firmware (meta)
linux-firmware-qlogic:QLogic NIC firmware
# Storage firmware (if needed)
# linux-firmware-marvell:Marvell storage/network firmware (not available in Alpine 3.22)

View File

@@ -1,4 +1,4 @@
#!/bin/sh -x
#!/bin/sh
# Alpine-based Zero-OS Init Script
# Maintains identical flow to original busybox version
@@ -24,6 +24,9 @@ mount -t devtmpfs devtmpfs /dev
mkdir -p /dev/pts
mount -t devpts devpts /dev/pts
# Re-initialize modules dependencies for basic init
depmod -a
echo "[+] building ram filesystem"
@@ -46,6 +49,8 @@ echo " copying /var..."
cp -ar /var $target
echo " copying /run..."
cp -ar /run $target
echo " creating /home"
mkdir $target/home
# Create essential directories
mkdir -p $target/dev
@@ -71,48 +76,66 @@ if [ -x /sbin/udevd ]; then
echo " starting udevd..."
udevd --daemon
# Preload keyboard input modules early so console works before zinit and rfs mounts
echo "[+] preloading keyboard input modules"
for m in i8042 atkbd usbhid hid hid_generic evdev xhci_pci xhci_hcd ehci_pci ehci_hcd ohci_pci ohci_hcd uhci_hcd; do
modprobe "$m"
done
echo "[+] loading essential drivers"
# Load core drivers for storage and network
/bin/busybox sh -l
modprobe btrfs 2>/dev/null || true
modprobe fuse 2>/dev/null || true
modprobe overlay 2>/dev/null || true
# Load storage drivers
modprobe ahci 2>/dev/null || true
modprobe nvme 2>/dev/null || true
modprobe virtio_blk 2>/dev/null || true
modprobe virtio_scsi 2>/dev/null || true
modprobe virtio_pci 2>/dev/null || true
# Load network drivers
modprobe virtio_net 2>/dev/null || true
echo " triggering device discovery..."
udevadm trigger --action=add --type=subsystems
udevadm trigger --action=add --type=devices
udevadm trigger --action=add
udevadm settle
echo " stopping udevd..."
kill $(pidof udevd) || true
else
echo " warning: udevd not found, skipping hardware detection"
fi
echo "[+] loading essential drivers"
# Load core drivers for storage and network
modprobe btrfs 2>/dev/null || true
modprobe fuse 2>/dev/null || true
modprobe overlay 2>/dev/null || true
# Load storage drivers
modprobe ahci 2>/dev/null || true
modprobe nvme 2>/dev/null || true
modprobe virtio_blk 2>/dev/null || true
modprobe virtio_scsi 2>/dev/null || true
modprobe virtio_pci 2>/dev/null || true
# Load network drivers
modprobe virtio_net 2>/dev/null || true
modprobe e1000 2>/dev/null || true
modprobe e1000e 2>/dev/null || true
echo "[+] debug hook: initdebug=true or /init-debug"
if grep -qw "initdebug=true" /proc/cmdline; then
if [ -x /init-debug ]; then
echo " initdebug=true: executing /init-debug ..."
sh /init-debug
else
echo " initdebug=true: starting interactive shell (no /init-debug). Type 'exit' to continue."
debug="-d"
/bin/busybox sh
fi
elif [ -x /init-debug ]; then
echo " executing /init-debug ..."
sh /init-debug
fi
# Unmount init filesystems
umount /proc 2>/dev/null || true
umount /sys 2>/dev/null || true
echo "[+] checking for debug files"
if [ -e /init-debug ]; then
echo " executing debug script..."
sh /init-debug
fi
echo "[+] switching root"
mkdir /root/home
echo " exec switch_root /mnt/root /sbin/zinit init"
exec switch_root /mnt/root /sbin/zinit init
exec switch_root /mnt/root /sbin/zinit ${debug} init
# switch_root failed, drop into shell
/bin/busybox sh
##

File diff suppressed because it is too large

View File

@@ -1,40 +1,77 @@
# Module loading specification for Zero-OS Alpine initramfs
# Format: STAGE:MODULE_NAME:FIRMWARE_PACKAGE (optional)
# Focus on most common NIC modules to ensure networking works on most hardware
# Format: STAGE:MODULE
# Firmware selection is authoritative in config/firmware.conf; do not add firmware hints here.
# Stage 1: Core subsystems + networking and essential boot modules
# All core subsystems and NICs must be loaded BEFORE network can come up
stage1:virtio:none # Core virtio subsystem (REQUIRED)
stage1:virtio_ring:none # Virtio ring buffer (REQUIRED)
stage1:virtio_pci:none # Virtio PCI bus
stage1:virtio_net:none # Virtio network (VMs, cloud)
stage1:virtio_scsi:none # Virtio SCSI (VMs, cloud)
stage1:virtio_blk:none # Virtio block (VMs, cloud)
stage1:e1000:linux-firmware-intel # Intel E1000 (very common)
stage1:e1000e:linux-firmware-intel # Intel E1000E (very common)
stage1:r8169:linux-firmware-realtek # Realtek (most common desktop/server)
stage1:realtek:linux-firmware-realtek # Realtek (most common desktop/server)
stage1:igb:linux-firmware-intel # Intel Gigabit (servers)
stage1:ixgbe:linux-firmware-intel # Intel 10GbE (servers)
stage1:i40e:linux-firmware-intel # Intel 40GbE (modern servers)
stage1:ice:linux-firmware-intel # Intel E800 series (latest)
stage1:8139too:none # Realtek 8139 (legacy)
stage1:8139cp:none # Realtek 8139C+ (legacy)
stage1:bnx2:linux-firmware-bnx2 # Broadcom NetXtreme
stage1:bnx2x:linux-firmware-bnx2 # Broadcom NetXtreme II
stage1:tg3:none # Broadcom Tigon3
stage1:b44:none # Broadcom 44xx
stage1:atl1:none # Atheros L1
stage1:atl1e:none # Atheros L1E
stage1:atl1c:none # Atheros L1C
stage1:alx:none # Atheros Alx
stage1:libata:none # Core ATA subsystem (REQUIRED)
stage1:scsi_mod:none # SCSI subsystem
stage1:sd_mod:none # SCSI disk support
stage1:ahci:none # SATA AHCI
stage1:nvme_core:none # Core NVMe subsystem (REQUIRED)
stage1:nvme:none # NVMe storage
stage1:tun:none # TUN/TAP for networking
stage1:overlay:none # OverlayFS for containers
stage1:fuse:none # OverlayFS for containers
# Stage 1: boot-critical storage, core virtio, networking, overlay, and input (USB/HID/keyboard)
# Storage
stage1:libata
stage1:libahci
stage1:ahci
stage1:scsi_mod
stage1:sd_mod
stage1:nvme_core
stage1:nvme
stage1:virtio_blk
stage1:virtio_scsi
# Core virtio
stage1:virtio
stage1:virtio_ring
stage1:virtio_pci
stage1:virtio_pci_legacy_dev
stage1:virtio_pci_modern_dev
# Networking (common NICs)
stage1:virtio_net
stage1:e1000
stage1:e1000e
stage1:igb
stage1:ixgbe
stage1:igc
stage1:i40e
stage1:ice
stage1:r8169
stage1:8139too
stage1:8139cp
stage1:bnx2
stage1:bnx2x
stage1:tg3
stage1:tun
# PHY drivers
stage1:realtek
# Broadcom PHY families (required for many Broadcom NICs)
stage1:broadcom
stage1:bcm7xxx
stage1:bcm87xx
stage1:bcm_phy_lib
stage1:bcm_phy_ptp
# Filesystems / overlay
stage1:overlay
stage1:fuse
# USB host controllers and HID/keyboard input
stage1:xhci_pci
stage1:xhci_hcd
stage1:ehci_pci
stage1:ehci_hcd
stage1:ohci_pci
stage1:ohci_hcd
stage1:uhci_hcd
stage1:usbhid
stage1:hid_generic
stage1:hid
stage1:atkbd
stage1:libps2
stage1:i8042
stage1:evdev
stage1:serio_raw
stage1:serio
# zos core networking is with openvswitch vxlan over mycelium
openvswitch
# Keep stage2 empty; we only use stage1 in this build
# stage2: (intentionally unused)

View File

@@ -16,6 +16,9 @@ eudev-libs
eudev-netifnames
kmod
fuse3
pciutils
efitools
efibootmgr
# Console/terminal management
util-linux
@@ -29,6 +32,7 @@ nftables
# Filesystem support (minimal)
btrfs-progs
dosfstools
sgdisk
# Essential libraries only
zlib
@@ -45,7 +49,3 @@ haveged
openssh-server
zellij
# Essential debugging and monitoring tools included
# NO development tools, NO curl/wget, NO python, NO redis
# NO massive linux-firmware package
# Other tools will be loaded from RFS after network connectivity

View File

@@ -8,12 +8,12 @@
S3_ENDPOINT="https://hub.grid.tf"
# AWS region string expected by the S3-compatible API
S3_REGION="us-east-1"
S3_REGION="garage"
# Bucket and key prefix used for RFS store (content-addressed blobs)
# The RFS store path will be: s3://.../<S3_BUCKET>/<S3_PREFIX>
S3_BUCKET="zos"
S3_PREFIX="zosbuilder/store"
S3_PREFIX="zos/store"
# Access credentials (required by rfs pack to push blobs)
S3_ACCESS_KEY="REPLACE_ME"
@@ -36,10 +36,10 @@ MANIFESTS_SUBPATH="manifests"
# Behavior flags (can be overridden by CLI flags or env)
# Whether to keep s3:// store as a fallback entry in the .fl after adding WEB_ENDPOINT
KEEP_S3_FALLBACK="false"
KEEP_S3_FALLBACK="true"
# Whether to attempt uploading .fl manifests to S3 (requires MinIO Client: mc)
UPLOAD_MANIFESTS="false"
UPLOAD_MANIFESTS="true"
# Read-only credentials for route URL in manifest (optional; defaults to write keys above)
# These will be embedded into the flist 'route.url' so runtime mounts can read directly from Garage.
@@ -53,5 +53,27 @@ READ_SECRET_KEY="REPLACE_ME_READ"
# - ROUTE_PATH: path to the blob route (default: /blobs)
# - ROUTE_REGION: region string for Garage (default: garage)
ROUTE_ENDPOINT="https://hub.grid.tf"
ROUTE_PATH="/blobs"
ROUTE_PATH="/zos/store"
ROUTE_REGION="garage"
# RESP/DB-style blob store (design-time placeholders; optional)
# Enable to allow pack scripts or future rfs CLI to upload blobs to a RESP-compatible store.
# This does not change the existing S3 flow; RESP acts as an additional backend.
#
# Example URI semantics (see docs/rfs-flists.md additions):
# resp://host:port/db?prefix=blobs
# resp+tls://host:port/db?prefix=blobs&ca=/etc/ssl/certs/ca.pem
# resp+sentinel://sentinelHost:26379/mymaster?prefix=blobs
#
# Minimal keys for a direct RESP endpoint
RESP_ENABLED="false"
RESP_ENDPOINT="localhost:6379" # host:port
RESP_DB="0" # integer DB index
RESP_PREFIX="zos/blobs" # namespace/prefix for content-addressed keys
RESP_USERNAME="" # optional
RESP_PASSWORD="" # optional
RESP_TLS="false" # true/false
RESP_CA="" # path to CA bundle when RESP_TLS=true
# Optional: Sentinel topology (overrides RESP_ENDPOINT for discovery)
RESP_SENTINEL="" # sentinelHost:port (comma-separated for multiple)
RESP_MASTER="" # Sentinel master name (e.g., "mymaster")

View File

@@ -4,7 +4,9 @@
# Git repositories to clone and build
git zinit https://github.com/threefoldtech/zinit master build_zinit
git mycelium https://github.com/threefoldtech/mycelium v0.6.1 build_mycelium
git zosstorage https://git.ourworld.tf/delandtj/zosstorage main build_zosstorage
git youki git@github.com:youki-dev/youki.git v0.5.7 build_youki
git rfs https://github.com/threefoldtech/rfs development build_rfs
# Pre-built releases to download
release rfs https://github.com/threefoldtech/rfs/releases/download/v2.0.6/rfs v2.0.6 install_rfs
# release rfs https://github.com/threefoldtech/rfs/releases/download/v2.0.6/rfs v2.0.6 install_rfs
release corex https://github.com/threefoldtech/corex/releases/download/2.1.4/corex-2.1.4-amd64-linux-static 2.1.4 install_corex rename=corex

View File

@@ -1,2 +0,0 @@
exec: sh /etc/zinit/init/cgroup.sh
oneshot: true

View File

@@ -1 +0,0 @@
exec: depmod -a

View File

@@ -1,2 +0,0 @@
exec: /sbin/getty -L 115200 ttyS0 vt100
restart: always

View File

@@ -1,2 +0,0 @@
exec: /sbin/getty console linux
restart: always

View File

@@ -1,2 +0,0 @@
exec: haveged -w 1024 -d 32 -i 32 -v 1
oneshot: true

View File

@@ -1,6 +0,0 @@
#!/bin/bash
echo "start ash terminal"
while true; do
getty -l /bin/ash -n 19200 tty2
done

View File

@@ -1,10 +0,0 @@
set -x
mount -t tmpfs cgroup_root /sys/fs/cgroup
subsys="pids cpuset cpu cpuacct blkio memory devices freezer net_cls perf_event net_prio hugetlb"
for sys in $subsys; do
mkdir -p /sys/fs/cgroup/$sys
mount -t cgroup $sys -o $sys /sys/fs/cgroup/$sys/
done

View File

@@ -1,10 +0,0 @@
#!/bin/bash
modprobe fuse
modprobe btrfs
modprobe tun
modprobe br_netfilter
echo never > /sys/kernel/mm/transparent_hugepage/enabled
ulimit -n 524288

View File

@@ -1,10 +0,0 @@
#!/bin/sh
ntp_flags=$(grep -o 'ntp=.*' /proc/cmdline | sed 's/^ntp=//')
params=""
if [ -n "$ntp_flags" ]; then
params=$(echo "-p $ntp_flags" | sed s/,/' -p '/g)
fi
exec ntpd -n $params

View File

@@ -1,4 +0,0 @@
#!/bin/bash
echo "Enable ip forwarding"
echo 1 > /proc/sys/net/ipv4/ip_forward

View File

@@ -1,3 +0,0 @@
#!/bin/sh
mkdir /dev/shm
mount -t tmpfs shm /dev/shm

View File

@@ -1,15 +0,0 @@
#!/bin/ash
if [ -f /etc/ssh/ssh_host_rsa_key ]; then
# ensure existing file permissions
chown root:root /etc/ssh/ssh_host_*
chmod 600 /etc/ssh/ssh_host_*
exit 0
fi
echo "Setting up sshd"
mkdir -p /run/sshd
ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -t rsa
ssh-keygen -f /etc/ssh/ssh_host_dsa_key -N '' -t dsa
ssh-keygen -f /etc/ssh/ssh_host_ecdsa_key -N '' -t ecdsa -b 521
ssh-keygen -f /etc/ssh/ssh_host_ed25519_key -N '' -t ed25519

View File

@@ -1,4 +0,0 @@
#!/bin/sh
udevadm trigger --action=add
udevadm settle

View File

@@ -1,2 +0,0 @@
exec: ip l set lo up
oneshot: true

View File

@@ -1,2 +0,0 @@
exec: sh /etc/zinit/init/modprobe.sh
oneshot: true

View File

@@ -1,6 +0,0 @@
exec: /usr/bin/mycelium --key-file /tmp/mycelium_priv_key.bin
--tun-name my0 --silent --peers tcp://188.40.132.242:9651 tcp://136.243.47.186:9651
tcp://185.69.166.7:9651 tcp://185.69.166.8:9651 tcp://65.21.231.58:9651 tcp://65.109.18.113:9651
tcp://209.159.146.190:9651 tcp://5.78.122.16:9651 tcp://5.223.43.251:9651 tcp://142.93.217.194:9651
after:
- network

View File

@@ -1,5 +0,0 @@
exec: dhcpcd eth0
after:
- depmod
- udevd
- udev-trigger

View File

@@ -1,3 +0,0 @@
exec: sh /etc/zinit/init/ntpd.sh
after:
- network

View File

@@ -1,2 +0,0 @@
exec: sh /etc/zinit/init/routing.sh
oneshot: true

View File

@@ -1,2 +0,0 @@
exec: /etc/zinit/init/shm.sh
oneshot: true

View File

@@ -1,2 +0,0 @@
exec: sh /etc/zinit/init/sshd-setup.sh
oneshot: true

View File

@@ -1,3 +0,0 @@
exec: /usr/sbin/sshd -D -e
after:
- sshd-setup

View File

@@ -1,6 +0,0 @@
exec: sh /etc/zinit/init/udev.sh
oneshot: true
after:
- depmod
- udevmon
- udevd

View File

@@ -1 +0,0 @@
exec: udevd

View File

@@ -1 +0,0 @@
exec: udevadm monitor

View File

@@ -1,33 +0,0 @@
# Main zinit configuration for Zero OS Alpine
# This replaces OpenRC completely
# Logging configuration
log_level: debug
log_file: /var/log/zinit/zinit.log
# Initialization phases
init:
# Phase 1: Critical system setup
- stage1-modules
- udevd
- depmod
# Phase 2: Extended hardware and networking
- stage2-modules
- network
- lo
# Phase 3: System services
- routing
- ntp
- haveged
# Phase 4: User services
- sshd-setup
- sshd
- getty
- console
- gettyconsole
# Service dependencies and ordering managed by individual service files
# All services are defined in the services/ subdirectory

View File

@@ -1,2 +0,0 @@
exec: /sbin/getty -L 9600 console
restart: always

View File

@@ -1,2 +1,2 @@
exec: /sbin/agetty --noclear -a root tty1 linux
exec: /sbin/agetty --noclear -a root console linux
restart: always

View File

@@ -1,2 +0,0 @@
exec: /sbin/agetty -a root -L 115200 ttyS0 vt100
restart: always

View File

@@ -0,0 +1,2 @@
exec: /sbin/agetty --noclear -a root tty2 vt100
restart: always

View File

@@ -1,2 +1,5 @@
exec: haveged -w 1024 -d 32 -i 32 -v 1
oneshot: true
after:
- shm

View File

@@ -1,5 +1,5 @@
#!/bin/sh
# rfs mount firmware flist over /usr/lib/firmware (plain S3 route inside the .fl)
# rfs mount firmware flist over /lib/firmware (plain S3 route inside the .fl)
# Looks for firmware-latest.fl in known locations; can be overridden via FIRMWARE_FLIST env
set -eu
@@ -7,8 +7,39 @@ set -eu
log() { echo "[rfs-firmware] $*"; }
RFS_BIN="${RFS_BIN:-rfs}"
TARGET="/usr/lib/firmware"
BASE_URL="${FLISTS_BASE_URL:-https://zos.grid.tf/store/flists}"
TARGET="/lib/firmware"
# Accept both FLISTS_BASE_URL and FLIST_BASE_URL (alias); prefer FLISTS_BASE_URL
if [ -n "${FLISTS_BASE_URL:-}" ]; then
BASE_URL="${FLISTS_BASE_URL}"
elif [ -n "${FLIST_BASE_URL:-}" ]; then
BASE_URL="${FLIST_BASE_URL}"
else
BASE_URL="https://zos.grid.tf/store/flists"
fi
# HTTP readiness helper: wait until BASE_URL responds to HTTP(S)
wait_for_http() {
url="$1"
tries="${2:-60}"
delay="${3:-2}"
i=0
while [ "$i" -lt "$tries" ]; do
if command -v wget >/dev/null 2>&1; then
if wget -q --spider "$url"; then
return 0
fi
elif command -v busybox >/dev/null 2>&1; then
if busybox wget -q --spider "$url"; then
return 0
fi
fi
i=$((i+1))
log "waiting for $url (attempt $i/$tries)"
sleep "$delay"
done
return 1
}
# Allow override via env
if [ -n "${FIRMWARE_FLIST:-}" ] && [ -f "${FIRMWARE_FLIST}" ]; then
@@ -29,11 +60,18 @@ else
fi
if [ -z "${FL:-}" ]; then
# Try remote fetch as a fallback
# Try remote fetch as a fallback (but first ensure BASE_URL is reachable)
mkdir -p /etc/rfs
FL="/etc/rfs/firmware-latest.fl"
URL="${BASE_URL}/firmware-latest.fl"
log "firmware-latest.fl not found locally; fetching ${URL}"
URL="${BASE_URL%/}/firmware-latest.fl"
log "firmware-latest.fl not found locally; will fetch ${URL}"
# Probe BASE_URL root so DNS/HTTP are ready before fetch
ROOT_URL="${BASE_URL%/}/"
if ! wait_for_http "$ROOT_URL" 60 2; then
log "BASE_URL not reachable yet: ${ROOT_URL}; continuing to attempt fetch anyway"
fi
log "fetching ${URL}"
if command -v wget >/dev/null 2>&1; then
wget -q -O "${FL}" "${URL}" || true

View File

@@ -3,7 +3,17 @@
modprobe fuse
modprobe btrfs
modprobe tun
modprobe br_netfilter
modprobe
modprobe usbhid
modprobe hid_generic
modprobe hid
modprobe atkbd
modprobe libps2
modprobe i2c_smbus
modprobe serio
modprobe i8042i
echo never > /sys/kernel/mm/transparent_hugepage/enabled

View File

@@ -9,7 +9,37 @@ log() { echo "[rfs-modules] $*"; }
RFS_BIN="${RFS_BIN:-rfs}"
KVER="$(uname -r)"
TARGET="/lib/modules/${KVER}"
BASE_URL="${FLISTS_BASE_URL:-https://zos.grid.tf/store/flists}"
# Accept both FLISTS_BASE_URL and FLIST_BASE_URL (alias); prefer FLISTS_BASE_URL
if [ -n "${FLISTS_BASE_URL:-}" ]; then
BASE_URL="${FLISTS_BASE_URL}"
elif [ -n "${FLIST_BASE_URL:-}" ]; then
BASE_URL="${FLIST_BASE_URL}"
else
BASE_URL="https://zos.grid.tf/store/flists"
fi
# HTTP readiness helper: wait until BASE_URL responds to HTTP(S)
wait_for_http() {
url="$1"
tries="${2:-60}"
delay="${3:-2}"
i=0
while [ "$i" -lt "$tries" ]; do
if command -v wget >/dev/null 2>&1; then
if wget -q --spider "$url"; then
return 0
fi
elif command -v busybox >/dev/null 2>&1; then
if busybox wget -q --spider "$url"; then
return 0
fi
fi
i=$((i+1))
log "waiting for $url (attempt $i/$tries)"
sleep "$delay"
done
return 1
}
# Allow override via env
if [ -n "${MODULES_FLIST:-}" ] && [ -f "${MODULES_FLIST}" ]; then
@@ -33,8 +63,15 @@ if [ -z "${FL:-}" ]; then
# Try remote fetch as a fallback (modules-<uname -r>-Zero-OS.fl)
mkdir -p /etc/rfs
FL="/etc/rfs/modules-${KVER}.fl"
URL="${BASE_URL}/modules-${KVER}-Zero-OS.fl"
log "modules-${KVER}.fl not found locally; fetching ${URL}"
URL="${BASE_URL%/}/modules-${KVER}-Zero-OS.fl"
log "modules-${KVER}.fl not found locally; will fetch ${URL}"
# Probe BASE_URL root so DNS/HTTP are ready before fetch
ROOT_URL="${BASE_URL%/}/"
if ! wait_for_http "$ROOT_URL" 60 2; then
log "BASE_URL not reachable yet: ${ROOT_URL}; continuing to attempt fetch anyway"
fi
log "fetching ${URL}"
if command -v wget >/dev/null 2>&1; then
wget -q -O "${FL}" "${URL}" || true

View File

@@ -5,4 +5,4 @@ if ! getent group dhcpcd >/dev/null 2>&1; then addgroup -S dhcpcd 2>/dev/null ||
if ! getent passwd dhcpcd >/dev/null 2>&1; then adduser -S -D -s /sbin/nologin -G dhcpcd dhcpcd 2>/dev/null || true; fi
# Exec dhcpcd (will run as root if it cannot drop to dhcpcd user)
interfaces=$(ip -br l | awk '!/lo/&&!/my0/{print $1}')
exec dhcpcd -B $interfaces
exec dhcpcd -p -B $interfaces

View File

@@ -0,0 +1,32 @@
#!/bin/bash
# Simple script to create OVS database and start ovsdb-server
# Configuration
DATABASE=${DATABASE:-"/etc/openvswitch/conf.db"}
DBSCHEMA="/usr/share/openvswitch/vswitch.ovsschema"
DB_SOCKET=${DB_SOCKET:-"/var/run/openvswitch/db.sock"}
RUNDIR="/var/run/openvswitch"
# Create run directory
mkdir -p "$RUNDIR"
# Create database if it doesn't exist
if [ ! -e "$DATABASE" ]; then
echo "Creating database: $DATABASE"
ovsdb-tool create "$DATABASE" "$DBSCHEMA"
fi
# Check if database needs conversion
if [ "$(ovsdb-tool needs-conversion "$DATABASE" "$DBSCHEMA")" = "yes" ]; then
echo "Converting database: $DATABASE"
ovsdb-tool convert "$DATABASE" "$DBSCHEMA"
fi
# Start ovsdb-server
echo "Starting ovsdb-server..."
exec ovsdb-server \
--remote=punix:"$DB_SOCKET" \
--remote=db:Open_vSwitch,Open_vSwitch,manager_options \
--pidfile \
--detach \
"$DATABASE"

View File

@@ -1,6 +1,8 @@
exec: /usr/bin/mycelium --key-file /tmp/mycelium_priv_key.bin
exec: /usr/bin/mycelium --key-file /var/cache/etc/mycelium_priv_key.bin
--tun-name my0 --silent --peers tcp://188.40.132.242:9651 tcp://136.243.47.186:9651
tcp://185.69.166.7:9651 tcp://185.69.166.8:9651 tcp://65.21.231.58:9651 tcp://65.109.18.113:9651
tcp://209.159.146.190:9651 tcp://5.78.122.16:9651 tcp://5.223.43.251:9651 tcp://142.93.217.194:9651
after:
- network
- udev-rfs
- zosstorage

View File

@@ -1,5 +1,6 @@
exec: sh /etc/zinit/init/network.sh eth0
exec: sh /etc/zinit/init/network.sh
after:
- depmod
- udevd
- udev-trigger
test: ping -c1 www.google.com

View File

@@ -0,0 +1,4 @@
exec: /etc/zinit/init/ovsdb-server.sh
oneshot: true
after:
- zosstorage

View File

@@ -1,4 +1,4 @@
exec: sh /etc/zinit/init/modules.sh
restart: always
after:
- network
- rfs-firmware

View File

@@ -1,5 +1,2 @@
exec: /etc/zinit/init/shm.sh
oneshot: true
after:
- firmware
- modules

View File

@@ -1,3 +1,4 @@
exec: /usr/sbin/sshd -D -e
after:
- sshd-setup
- haveged

View File

@@ -1,5 +1,4 @@
exec: /bin/sh -c "udevadm control --reload; udevadm trigger --action=add --type=subsystems; udevadm trigger --action=add --type=devices; udevadm settle"
exec: sh -c "sleep 3 ; /etc/zinit/init/udev.sh"
oneshot: true
after:
- rfs-modules
- rfs-firmware

View File

@@ -0,0 +1,4 @@
exec: /usr/bin/zosstorage -l debug
oneshot: true
after:
- rfs-modules

View File

@@ -1,106 +0,0 @@
#!/bin/sh -x
# Alpine-based Zero-OS Init Script
# Maintains identical flow to original busybox version
echo ""
echo "============================================"
echo "== ZERO-OS ALPINE INITRAMFS =="
echo "============================================"
echo "[+] creating ram filesystem"
target="/mnt/root"
mkdir -p $target
mount -t proc proc /proc
mount -t sysfs sysfs /sys
mount -t tmpfs tmpfs /mnt/root -o size=1024M
mount -t devtmpfs devtmpfs /dev
echo "[+] building ram filesystem"
# Copy Alpine filesystem to tmpfs (same as original)
echo " copying /bin..."
cp -ar /bin $target
echo " copying /etc..."
cp -ar /etc $target
echo " copying /lib..."
cp -ar /lib* $target
echo " copying /usr..."
cp -ar /usr $target
echo " copying /root..."
cp -ar /root $target
echo " copying /sbin..."
cp -ar /sbin $target
echo " copying /tmp..."
cp -ar /tmp $target
echo " copying /var..."
cp -ar /var $target
echo " copying /run..."
cp -ar /run $target
# Create essential directories
mkdir -p $target/dev
mkdir -p $target/sys
mkdir -p $target/proc
mkdir -p $target/mnt
# Mount filesystems in tmpfs
mount -t proc proc $target/proc
mount -t sysfs sysfs $target/sys
mount -t devtmpfs devtmpfs $target/dev
# Mount devpts for terminals
mkdir -p $target/dev/pts
mount -t devpts devpts $target/dev/pts
echo "[+] setting environment"
export PATH
echo "[+] probing drivers"
# Use Alpine's udev instead of busybox udevadm
if [ -x /sbin/udevd ]; then
echo " starting udevd..."
udevd --daemon
echo " triggering device discovery..."
udevadm trigger --action=add --type=subsystems
udevadm trigger --action=add --type=devices
udevadm settle
echo " stopping udevd..."
kill $(pidof udevd) || true
else
echo " warning: udevd not found, skipping hardware detection"
fi
echo "[+] loading essential drivers"
# Load core drivers for storage and network
modprobe btrfs 2>/dev/null || true
modprobe fuse 2>/dev/null || true
modprobe overlay 2>/dev/null || true
# Load storage drivers
modprobe ahci 2>/dev/null || true
modprobe nvme 2>/dev/null || true
modprobe virtio_blk 2>/dev/null || true
modprobe virtio_scsi 2>/dev/null || true
# Load network drivers
modprobe virtio_net 2>/dev/null || true
modprobe e1000 2>/dev/null || true
modprobe e1000e 2>/dev/null || true
# Unmount init filesystems
umount /proc 2>/dev/null || true
umount /sys 2>/dev/null || true
echo "[+] checking for debug files"
if [ -e /init-debug ]; then
echo " executing debug script..."
sh /init-debug
fi
echo "[+] switching root"
echo " exec switch_root /mnt/root /sbin/zinit init"
exec switch_root /mnt/root /sbin/zinit -d init
##

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,59 +0,0 @@
# Essential kernel modules for Zero-OS Alpine initramfs
# This file contains a curated list of essential modules for network and storage functionality
# Comments are supported (lines starting with #)
# Network drivers - Intel
e1000
e1000e
igb
ixgbe
i40e
ice
# Network drivers - Realtek
r8169
8139too
8139cp
# Network drivers - Broadcom
bnx2
bnx2x
tg3
b44
# Network drivers - Atheros
atl1
atl1e
atl1c
alx
# VirtIO drivers
virtio_net
virtio_scsi
virtio_blk
virtio_pci
# Tunnel and container support
tun
overlay
# Storage subsystem (essential only)
scsi_mod
sd_mod
# Control Groups (cgroups v1 and v2) - essential for container management
cgroup_pids
cgroup_freezer
cgroup_perf_event
cgroup_device
cgroup_cpuset
cgroup_bpf
cgroup_debug
memcg
blkio_cgroup
cpu_cgroup
cpuacct
hugetlb_cgroup
net_cls_cgroup
net_prio_cgroup
devices_cgroup

View File

@@ -1,46 +0,0 @@
# MINIMAL Alpine packages for Zero-OS embedded initramfs
# Target: ~50MB total (not 700MB!)
# Core system (essential only)
alpine-baselayout
busybox
musl
# Module loading & hardware detection
eudev
eudev-hwids
eudev-libs
eudev-netifnames
kmod
# Console/terminal management
util-linux
# Essential networking (for Zero-OS connectivity)
iproute2
ethtool
# Filesystem support (minimal)
btrfs-progs
dosfstools
# Essential libraries only
zlib
# Network utilities (minimal)
dhcpcd
tcpdump
bmon
# Random number generation (for crypto/security)
haveged
# SSH access and terminal multiplexer
openssh-server
zellij
# Essential debugging and monitoring tools included
# NO development tools, NO curl/wget, NO python, NO redis
# NO massive linux-firmware package
# Other tools will be loaded from RFS after network connectivity

View File

@@ -1,10 +0,0 @@
# sources.conf - Components to download and build for initramfs
# Format: TYPE:NAME:URL:VERSION:BUILD_FUNCTION[:EXTRA_OPTIONS]
# Git repositories to clone and build
git:zinit:https://github.com/threefoldtech/zinit:master:build_zinit
git:mycelium:https://github.com/threefoldtech/mycelium:0.6.1:build_mycelium
git:rfs:https://github.com/threefoldtech/rfs:development:build_rfs
# Pre-built releases to download
release:corex:https://github.com/threefoldtech/corex/releases/download/2.1.4/corex-2.1.4-amd64-linux-static:2.1.4:install_corex:rename=corex

View File

@@ -1,2 +0,0 @@
exec: sh /etc/zinit/init/cgroup.sh
oneshot: true

View File

@@ -1,2 +0,0 @@
exec: /sbin/getty -L 9600 console
restart: always

View File

@@ -1 +0,0 @@
exec: depmod -a

View File

@@ -1,2 +0,0 @@
exec: /sbin/getty -L 115200 ttyS0 vt100
restart: always

View File

@@ -1,2 +0,0 @@
exec: /sbin/getty console linux
restart: always

View File

@@ -1,2 +0,0 @@
exec: haveged -w 1024 -d 32 -i 32 -v 1
oneshot: true

View File

@@ -1,6 +0,0 @@
#!/bin/bash
echo "start ash terminal"
while true; do
getty -l /bin/ash -n 19200 tty2
done

View File

@@ -1,10 +0,0 @@
set -x
mount -t tmpfs cgroup_root /sys/fs/cgroup
subsys="pids cpuset cpu cpuacct blkio memory devices freezer net_cls perf_event net_prio hugetlb"
for sys in $subsys; do
mkdir -p /sys/fs/cgroup/$sys
mount -t cgroup $sys -o $sys /sys/fs/cgroup/$sys/
done

View File

@@ -1,10 +0,0 @@
#!/bin/bash
modprobe fuse
modprobe btrfs
modprobe tun
modprobe br_netfilter
echo never > /sys/kernel/mm/transparent_hugepage/enabled
ulimit -n 524288

View File

@@ -1,10 +0,0 @@
#!/bin/sh
ntp_flags=$(grep -o 'ntp=.*' /proc/cmdline | sed 's/^ntp=//')
params=""
if [ -n "$ntp_flags" ]; then
params=$(echo "-p $ntp_flags" | sed s/,/' -p '/g)
fi
exec ntpd -n $params

View File

@@ -1,4 +0,0 @@
#!/bin/bash
echo "Enable ip forwarding"
echo 1 > /proc/sys/net/ipv4/ip_forward

View File

@@ -1,3 +0,0 @@
#!/bin/sh
mkdir /dev/shm
mount -t tmpfs shm /dev/shm

View File

@@ -1,15 +0,0 @@
#!/bin/ash
if [ -f /etc/ssh/ssh_host_rsa_key ]; then
# ensure existing file permissions
chown root:root /etc/ssh/ssh_host_*
chmod 600 /etc/ssh/ssh_host_*
exit 0
fi
echo "Setting up sshd"
mkdir -p /run/sshd
ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -t rsa
ssh-keygen -f /etc/ssh/ssh_host_dsa_key -N '' -t dsa
ssh-keygen -f /etc/ssh/ssh_host_ecdsa_key -N '' -t ecdsa -b 521
ssh-keygen -f /etc/ssh/ssh_host_ed25519_key -N '' -t ed25519

View File

@@ -1,4 +0,0 @@
#!/bin/sh
udevadm trigger --action=add
udevadm settle

View File

@@ -1,2 +0,0 @@
exec: ip l set lo up
oneshot: true

View File

@@ -1,2 +0,0 @@
exec: sh /etc/zinit/init/modprobe.sh
oneshot: true

View File

@@ -1,6 +0,0 @@
exec: /usr/bin/mycelium --key-file /tmp/mycelium_priv_key.bin
--tun-name my0 --silent --peers tcp://188.40.132.242:9651 tcp://136.243.47.186:9651
tcp://185.69.166.7:9651 tcp://185.69.166.8:9651 tcp://65.21.231.58:9651 tcp://65.109.18.113:9651
tcp://209.159.146.190:9651 tcp://5.78.122.16:9651 tcp://5.223.43.251:9651 tcp://142.93.217.194:9651
after:
- network

View File

@@ -1,5 +0,0 @@
exec: dhcpcd eth0
after:
- depmod
- udevd
- udev-trigger

View File

@@ -1,3 +0,0 @@
exec: sh /etc/zinit/init/ntpd.sh
after:
- network

View File

@@ -1,2 +0,0 @@
exec: sh /etc/zinit/init/routing.sh
oneshot: true

View File

@@ -1,2 +0,0 @@
exec: /etc/zinit/init/shm.sh
oneshot: true

View File

@@ -1,2 +0,0 @@
exec: sh /etc/zinit/init/sshd-setup.sh
oneshot: true

View File

@@ -1,3 +0,0 @@
exec: /usr/sbin/sshd -D -e
after:
- sshd-setup

View File

@@ -1,6 +0,0 @@
exec: sh /etc/zinit/init/udev.sh
oneshot: true
after:
- depmod
- udevmon
- udevd

View File

@@ -1 +0,0 @@
exec: udevd

View File

@@ -1 +0,0 @@
exec: udevadm monitor

View File

@@ -84,9 +84,9 @@ Initramfs Assembly Key Functions
- Install init script: [bash.initramfs_install_init_script()](scripts/lib/initramfs.sh:74)
- Installs [config/init](config/init) as /init in initramfs root.
- Components copy: [bash.initramfs_copy_components()](scripts/lib/initramfs.sh:101)
- Installs built components (zinit/rfs/mycelium/corex) into proper locations, strips/UPX where applicable.
- Installs built components (zinit/rfs/mycelium/corex/zosstorage) into proper locations, strips/UPX where applicable.
- Modules setup: [bash.initramfs_setup_modules()](scripts/lib/initramfs.sh:229)
- Reads [config/modules.conf](config/modules.conf), resolves deps via [bash.initramfs_resolve_module_dependencies()](scripts/lib/initramfs.sh:318), generates stage1 list with firmware correlation.
- Reads [config/modules.conf](config/modules.conf), resolves deps via [bash.initramfs_resolve_module_dependencies()](scripts/lib/initramfs.sh:318), generates stage1 list (firmware hints in modules.conf are ignored; firmware.conf is authoritative).
- Create module scripts: [bash.initramfs_create_module_scripts()](scripts/lib/initramfs.sh:427)
- Writes /etc/zinit/init/stage1-modules.sh and stage2-modules.sh for zinit to load modules.
- Binary size optimization: [bash.initramfs_strip_and_upx()](scripts/lib/initramfs.sh:491)
@@ -110,10 +110,19 @@ RFS Flists (modules/firmware)
- Packing scripts:
- Modules: [bash.pack-modules.sh](scripts/rfs/pack-modules.sh:1)
- Firmware: [bash.pack-firmware.sh](scripts/rfs/pack-firmware.sh:1)
- Firmware policy:
- For initramfs: [config/firmware.conf](config/firmware.conf) is the single source of truth for preinstalled firmware; modules.conf hints are ignored.
- For RFS: install all Alpine linux-firmware* packages into the build container and pack from /lib/firmware (full set for runtime).
- Integrated in stage_rfs_flists:
- Embeds /etc/rfs/modules-<FULL_KERNEL_VERSION>.fl
- Embeds /etc/rfs/firmware-latest.fl (or tagged by FIRMWARE_TAG)
- See [bash.main_build_process() — stage_rfs_flists](scripts/build.sh:298)
- Runtime mount/readiness:
- Firmware flist mounts over /lib/firmware (overmount hides any initramfs firmware).
- Modules flist mounts at /lib/modules/$(uname -r).
- Init scripts probe BASE_URL reachability (accepts FLISTS_BASE_URL or FLIST_BASE_URL) and wait for HTTP(S) before fetching:
- Firmware: [sh.firmware.sh](config/zinit/init/firmware.sh:1)
- Modules: [sh.modules.sh](config/zinit/init/modules.sh:1)
Branding Behavior (Passwordless Root, motd/issue)
- Finalization hook: [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:575)
@@ -126,10 +135,12 @@ Branding Behavior (Passwordless Root, motd/issue)
- Branding also updates /etc/motd and /etc/issue to Zero-OS.
Console and getty
- Early keyboard and debug:
- [config/init](config/init) preloads input/HID and USB HCD modules (i8042, atkbd, usbhid, hid, hid_generic, evdev, xhci/ehci/ohci/uhci) so console input works before zinit/rfs.
- Kernel cmdline initdebug=true opens an early interactive shell; if /init-debug exists and is executable, it runs preferentially.
- Serial and console getty configs (zinit service YAML):
- [config/zinit/getty.yaml](config/zinit/getty.yaml)
- [config/zinit/gettyconsole.yaml](config/zinit/gettyconsole.yaml)
- [config/zinit/console.yaml](config/zinit/console.yaml)
- [config/zinit/getty-tty1.yaml](config/zinit/getty-tty1.yaml)
- [config/zinit/getty-console.yaml](config/zinit/getty-console.yaml)
- Optional ash login loop (not enabled unless referenced):
- [bash.ashloging.sh](config/zinit/init/ashloging.sh:1)
@@ -156,14 +167,26 @@ How to Verify Passwordless Root
Stage System and Incremental Rebuilds
- Stage markers stored in .build-stages/ (one file per stage).
- To minimally rebuild:
- Remove relevant .done files, e.g.:
- initramfs_create.done initramfs_test.done validation.done
- Rerun: DEBUG=1 ./scripts/build.sh --skip-tests
- Minimal rebuild helper (host or container):
- [scripts/rebuild-after-zinit.sh](scripts/rebuild-after-zinit.sh) clears only: modules_setup, modules_copy, init_script, zinit_setup, validation, initramfs_create, initramfs_test (kernel_build only with --with-kernel; kernel_modules only with --refresh-container-mods).
- Flags:
- --with-kernel (also rebuild kernel; ensures cpio is recreated right before embedding)
- --refresh-container-mods (rebuild container /lib/modules for fresh containers)
- --verify-only (report changed files and stage status; no rebuild)
- Shows stage status before/after marker removal; no --rebuild-from is passed by default (relies on markers only).
- Manual minimal rebuild:
- Remove relevant .done files, e.g.: initramfs_create.done initramfs_test.done validation.done
- Rerun: DEBUG=1 ./scripts/build.sh
- Show status:
- ./scripts/build.sh --show-stages
- Test built kernel:
- ./runit.sh --hypervisor qemu
- ./runit.sh --hypervisor ch --disks 5 --reset
Key Decisions (current)
- Firmware selection for initramfs comes exclusively from [config/firmware.conf](config/firmware.conf); firmware hints in modules.conf are ignored to avoid duplication/mismatch.
- Runtime firmware flist overmounts /lib/firmware after network readiness; init scripts wait for FLISTS_BASE_URL/FLIST_BASE_URL HTTP reachability before fetching.
- Early keyboard and debug shell added to [config/init](config/init) as described above.
- Branding enforces passwordless root via passwd -d -R inside initramfs finalization, avoiding direct edits of passwd/shadow files.
- Directory paths normalized to absolute after loading config to avoid CWD-sensitive behavior.
- Container image contains shadow suite to ensure passwd/chpasswd availability; perl removed.
@@ -172,11 +195,11 @@ File Pointers (quick jump)
- Orchestrator: [scripts/build.sh](scripts/build.sh)
- Common and config loading: [bash.common.sh](scripts/lib/common.sh:1)
- Finalization hook: [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:575)
- Passwordless deletion: [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:592)
- Validation entry: [bash.initramfs_validate()](scripts/lib/initramfs.sh:799)
- CPIO creation: [bash.initramfs_create_cpio()](scripts/lib/initramfs.sh:688)
- Kernel embed config: [bash.kernel_modify_config_for_initramfs()](scripts/lib/kernel.sh:130)
- RFS packers: [bash.pack-modules.sh](scripts/rfs/pack-modules.sh:1), [bash.pack-firmware.sh](scripts/rfs/pack-firmware.sh:1)
- USB creator: [scripts/make-grub-usb.sh](scripts/make-grub-usb.sh)
Change Log
- 2025-09-09:
@@ -184,3 +207,80 @@ Change Log
- Normalize INSTALL_DIR/COMPONENTS_DIR/KERNEL_DIR/DIST_DIR to absolute paths post-config load.
- Add validation diagnostics prints (input/PWD/PROJECT_ROOT/INSTALL_DIR/resolved).
- Ensure shadow package in container for passwd/chpasswd; keep openssl and openssl-dev; remove perl earlier.
- 2025-10-10:
- Record zosstorage as a standard component installed during [bash.initramfs_copy_components()](scripts/lib/initramfs.sh:101) and sync component documentation references.
Updates 2025-10-01
- Function index regenerated: see [scripts/functionlist.md](scripts/functionlist.md) for an authoritative map of all functions with current line numbers. Use it alongside the quick links below to jump into code fast.
- Key jump-points (current lines):
- Finalization: [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:568)
- CPIO creation: [bash.initramfs_create_cpio()](scripts/lib/initramfs.sh:691)
- Validation: [bash.initramfs_validate()](scripts/lib/initramfs.sh:820)
- Kernel embed config: [bash.kernel_modify_config_for_initramfs()](scripts/lib/kernel.sh:130)
- Stage orchestrator entry: [bash.main_build_process()](scripts/build.sh:214)
- Repo-wide index: [scripts/functionlist.md](scripts/functionlist.md)
Roadmap / TODO (tracked in tool todo list)
- Zosception (zinit service graph and ordering)
- Define additional services and ordering for nested/recursive orchestration.
- Likely integration points:
- Networking readiness before RFS: [config/zinit/network.yaml](config/zinit/network.yaml)
- Early udev coldplug: [config/zinit/udev-trigger.yaml](config/zinit/udev-trigger.yaml)
- Post-RFS coldplug: [config/zinit/udev-rfs.yaml](config/zinit/udev-rfs.yaml)
- Ensure dependency edges are correct in the service DAG image (see docs/img_*.png).
- Add zosstorage to initramfs
- Source:
- If packaged: add to [config/packages.list](config/packages.list).
- If built from source: extend [bash.components_parse_sources_conf()](scripts/lib/components.sh:13) and add a build_* function; install via [bash.initramfs_copy_components()](scripts/lib/initramfs.sh:102).
- Zinit unit:
- Add YAML under [config/zinit/](config/zinit/) and hook into the network-ready path.
- Ordering:
- Start after "network" and before/with RFS mounts if it provides storage functionality used by rfs.
- RFS blob store backends (design + docs; http and s3 exist)
- Current S3 store URI construction: [bash.rfs_common_build_s3_store_uri()](scripts/rfs/common.sh:137)
- Flist manifest store patching: [bash.rfs_common_patch_flist_stores()](scripts/rfs/common.sh:385)
- Route URL patching: [bash.rfs_common_patch_flist_route_url()](scripts/rfs/common.sh:494)
- Packers entrypoints:
- [scripts/rfs/pack-modules.sh](scripts/rfs/pack-modules.sh:1)
- [scripts/rfs/pack-firmware.sh](scripts/rfs/pack-firmware.sh:1)
- Proposed additional backend: RESP/DB-style store
- Goal: Allow rfs to push/fetch content-addressed blobs via a RESP-compatible endpoint (e.g., Redis/KeyDB/Dragonfly-like), or a thin HTTP/RESP adapter.
- Draft URI scheme examples:
- resp://host:port/db?tls=0&prefix=blobs
- resp+tls://host:port/db?prefix=blobs&ca=/etc/ssl/certs/ca.pem
- resp+sentinel://sentinelHost:26379/mymaster?prefix=blobs
- Minimum operations:
- PUT blob: SETEX prefix/ab/cd/hash ttl file-bytes or HSET prefix/hash data file-bytes
- GET blob: GET or HGET
- HEAD/exists: EXISTS
- Optional: pipelined/mget for batch prefetch
- Client integration layers:
- Pack-time: extend rfs CLI store resolver (design doc first; scripts/rfs/common.sh can map scheme→uploader if CLI not ready).
- Manifest post-process: still supported; stores table may include multiple URIs (s3 + resp) for redundancy.
- Caching and retries:
- Local on-disk cache under dist/.rfs-cache keyed by hash with LRU GC.
- Exponential backoff on GET failures; fall back across stores in order.
- Auth:
- RESP: optional username/password in URI; TLS with cert pinning parameters.
- Keep secrets in config/rfs.conf or env; do not embed write creds in manifests (read-credential routes only).
- Deliverables:
- Design section in docs/rfs-flists.md (to be added)
- Config keys in config/rfs.conf.example for RESP endpoints
- Optional shim uploader script if CLI support lags.
- Documentation refresh tasks
- Cross-check this file's clickable references against [scripts/functionlist.md](scripts/functionlist.md) after changes in lib files.
- Keep “Branding behavior” and “Absolute Path Normalization” pointers aligned with:
- [bash.common.sh normalization](scripts/lib/common.sh:244)
- [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:568)
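For the "optional shim uploader" deliverable in the RFS backends item above, a minimal sketch (assumptions: a plain resp:// endpoint, redis-cli available in the build container, the RESP_ENDPOINT/RESP_DB/RESP_PREFIX keys proposed for config/rfs.conf.example, and the prefix/ab/cd/hash key fan-out named in the operations list; resp-put.sh is a hypothetical name, not an existing script):
#!/bin/sh
# resp-put.sh (sketch): push one blob to a RESP store under a content-addressed key
set -eu
blob="$1"
host="${RESP_ENDPOINT%:*}"
port="${RESP_ENDPOINT#*:}"
hash="$(sha256sum "$blob" | awk '{print $1}')"
key="${RESP_PREFIX:-blobs}/$(printf '%s' "$hash" | cut -c1-2)/$(printf '%s' "$hash" | cut -c3-4)/${hash}"
# Skip the upload if the key is already present (EXISTS returns 1 when it exists)
if [ "$(redis-cli -h "$host" -p "$port" -n "${RESP_DB:-0}" EXISTS "$key")" = "1" ]; then
  echo "exists: $key"
else
  # -x feeds the blob bytes from stdin as the value of SET
  redis-cli -h "$host" -p "$port" -n "${RESP_DB:-0}" -x SET "$key" < "$blob" >/dev/null
  echo "uploaded: $key"
fi
The packers could call such a shim once per blob before manifest patching, leaving the existing S3 flow untouched as the primary path.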
Diagnostics-first reminder
- Use DEBUG=1 and stage markers for minimal rebuilds.
- Quick commands:
- Show stages: ./scripts/build.sh --show-stages
- Minimal rebuild after zinit/init edits: [scripts/rebuild-after-zinit.sh](scripts/rebuild-after-zinit.sh)
- Validate archive: [bash.initramfs_create_cpio()](scripts/lib/initramfs.sh:691), then [bash.initramfs_test_archive()](scripts/lib/initramfs.sh:953)

View File

@@ -34,6 +34,8 @@ Repository map (jump-points)
- RFS flists tooling:
- Modules packer: [bash.pack-modules.sh](scripts/rfs/pack-modules.sh:1)
- Firmware packer: [bash.pack-firmware.sh](scripts/rfs/pack-firmware.sh:1)
- Boot media utility:
- GRUB USB creator: [scripts/make-grub-usb.sh](scripts/make-grub-usb.sh)
High-priority behaviors and policies
1) Branding passwordless root (shadow-aware)
@@ -53,18 +55,20 @@ High-priority behaviors and policies
- Pre-CPIO essential check includes “home”:
- [bash.initramfs_create_cpio()](scripts/lib/initramfs.sh:680)
4) Remote flist fallback (modules + firmware)
4) Remote flist fallback + readiness (modules + firmware)
- When local manifests are missing, fetch from zos.grid.tf and mount via rfs:
- Firmware fallback: [sh.firmware.sh](config/zinit/init/firmware.sh:1)
- Default BASE_URL: https://zos.grid.tf/store/flists
- Fetch path: ${BASE_URL}/firmware-latest.fl to /etc/rfs/firmware-latest.fl
- Modules fallback: [sh.modules.sh](config/zinit/init/modules.sh:1)
- Fetch path: ${BASE_URL}/modules-$(uname -r)-Zero-OS.fl to /etc/rfs/modules-$(uname -r).fl
- Env overrides:
- FIRMWARE_FLIST, MODULES_FLIST: use local file if provided
- RFS_BIN: defaults to rfs
- FLISTS_BASE_URL: overrides base URL
- wget is available (initramfs includes it); scripts prefer wget, fallback to busybox wget if needed.
- Firmware: [sh.firmware.sh](config/zinit/init/firmware.sh:1)
- BASE_URL from FLISTS_BASE_URL (or FLIST_BASE_URL alias), default https://zos.grid.tf/store/flists
- Probes BASE_URL for HTTP(S) readiness (wget --spider) before fetching firmware-latest.fl
- Fetch path: ${BASE_URL%/}/firmware-latest.fl to /etc/rfs/firmware-latest.fl
- Modules: [sh.modules.sh](config/zinit/init/modules.sh:1)
- BASE_URL from FLISTS_BASE_URL (or FLIST_BASE_URL alias)
- Probes BASE_URL for HTTP(S) readiness before fetching modules-$(uname -r)-Zero-OS.fl
- Env overrides:
- FIRMWARE_FLIST, MODULES_FLIST: use local file if provided
- RFS_BIN: defaults to rfs
- FLISTS_BASE_URL or FLIST_BASE_URL: override base URL
- wget is available (initramfs includes it); scripts prefer wget and fall back to busybox wget if needed.
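Example overrides (a sketch; the mirror URL is a placeholder, and in normal operation these scripts are started by zinit rather than by hand):
# Point the fetch at a private mirror for this run only:
FLISTS_BASE_URL=http://192.168.1.10:8080/flists sh /etc/zinit/init/firmware.sh
# Skip fetching entirely by supplying a pre-staged manifest:
MODULES_FLIST=/etc/rfs/modules-$(uname -r).fl sh /etc/zinit/init/modules.sh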
5) Incremental build guards
- Kernel build now defaults INITRAMFS_ARCHIVE if unset (fix for unbound var on incremental runs):
@@ -72,6 +76,12 @@ High-priority behaviors and policies
- Initramfs test stage already guards INITRAMFS_ARCHIVE:
- [bash.stage_initramfs_test()](scripts/build.sh:385)
6) Early keyboard input and debug shell
- Early HID/input and USB HCD modules are preloaded before zinit to ensure console usability:
- [config.init](config/init:80)
- Debug hook: kernel cmdline initdebug=true runs /init-debug if present or drops to a shell:
- [config.init](config/init:115)
Flags and config
- Config file: [config/build.conf](config/build.conf)
- Branding flags:
@@ -85,6 +95,9 @@ Flags and config
- COMPONENTS_DIR="components"
- KERNEL_DIR="kernel"
- DIST_DIR="dist"
- Firmware policies:
- Initramfs: [config/firmware.conf](config/firmware.conf) is authoritative; modules.conf firmware hints are ignored.
- RFS: full Alpine firmware set is installed into container and packed from /lib/firmware (see [bash.pack-firmware.sh](scripts/rfs/pack-firmware.sh:1)).
- Firmware flist naming tag:
- FIRMWARE_TAG (env > config > “latest”)
- Container image tools (podman rootless OK) defined by [Dockerfile](Dockerfile):
@@ -94,7 +107,7 @@ Diagnostics-first workflow (strict)
- For any failure, first collect specific logs:
- Enable DEBUG=1 for verbose logs.
- Re-run only the impacted stage if possible:
- Example: rm -f .build-stages/validation.done && DEBUG=1 ./scripts/build.sh --skip-tests
- Example: rm -f .build-stages/validation.done && DEBUG=1 ./scripts/build.sh
- Use existing diagnostics:
- Branding debug lines: [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:575)
- Validation debug lines (input, PWD, PROJECT_ROOT, INSTALL_DIR, resolved): [bash.initramfs_validate()](scripts/lib/initramfs.sh:799)
@@ -106,14 +119,17 @@ Common tasks and commands
- DEBUG=1 ./scripts/build.sh
- Minimal rebuild of last steps:
- rm -f .build-stages/initramfs_create.done .build-stages/initramfs_test.done .build-stages/validation.done
- DEBUG=1 ./scripts/build.sh --skip-tests
- DEBUG=1 ./scripts/build.sh
- Validation only:
- rm -f .build-stages/validation.done
- DEBUG=1 ./scripts/build.sh --skip-tests
- DEBUG=1 ./scripts/build.sh
- Show stage status:
- ./scripts/build.sh --show-stages
- Test built kernel:
- ./runit.sh --hypervisor qemu
- ./runit.sh --hypervisor ch --disks 5
Checklists
Checklists and helpers
A) Diagnose “passwordless root not working”
- Confirm branding flags are loaded:
@@ -130,17 +146,34 @@ B) Fix “Initramfs directory not found: initramfs (resolved: /workspace/kernel/
- Confirm validation prints “Validation debug:” with resolved absolute path:
- [bash.initramfs_validate()](scripts/lib/initramfs.sh:799)
C) INITRAMFS_ARCHIVE unbound during kernel build stage
C) Minimal rebuild after zinit/init/modules.conf changes
- Use the helper (works from host or container):
- scripts/rebuild-after-zinit.sh
- Defaults: initramfs-only; clears only modules_setup, modules_copy, init_script, zinit_setup, validation, initramfs_create, initramfs_test
- Flags:
- --with-kernel: also rebuild kernel; cpio is recreated immediately before embedding
- --refresh-container-mods: rebuild container /lib/modules for fresh containers
- --verify-only: report changed files and stage status; no rebuild
- Stage status is printed before/after marker removal; the helper avoids --rebuild-from by default to prevent running early stages.
- Manual fallback:
- rm -f .build-stages/initramfs_create.done .build-stages/initramfs_test.done .build-stages/validation.done
- DEBUG=1 ./scripts/build.sh
D) INITRAMFS_ARCHIVE unbound during kernel build stage
- stage_kernel_build now defaults INITRAMFS_ARCHIVE if unset:
- [bash.stage_kernel_build()](scripts/build.sh:398)
- If error persists, ensure stage_initramfs_create ran or that defaulting logic sees dist/initramfs.cpio.xz.
D) Modules/firmware not found by rfs init scripts
- Confirm local manifests under /etc/rfs or remote fallback working:
E) Modules/firmware not found by rfs init scripts
- Confirm local manifests under /etc/rfs or remote fallback:
- Firmware: [sh.firmware.sh](config/zinit/init/firmware.sh:1)
- Modules: [sh.modules.sh](config/zinit/init/modules.sh:1)
- For remote:
- Set FLISTS_BASE_URL or FLIST_BASE_URL; default is https://zos.grid.tf/store/flists
- Scripts probe BASE_URL readiness (wget --spider) before fetch
- Firmware target is /lib/firmware; modules target is /lib/modules/$(uname -r)
- Confirm uname -r matches remote naming “modules-$(uname -r)-Zero-OS.fl”
- Confirm wget present (it is in initramfs), or busybox fallback.
- Confirm wget present (or busybox wget)
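A quick manual probe from a shell on the node (the URL shown is just the default base; substitute any FLISTS_BASE_URL/FLIST_BASE_URL override):
# Same readiness test the init scripts perform before fetching:
wget -q --spider https://zos.grid.tf/store/flists/ && echo "base reachable" || echo "base NOT reachable"
# Then confirm the expected manifest name exists for this kernel:
wget -q --spider "https://zos.grid.tf/store/flists/modules-$(uname -r)-Zero-OS.fl" && echo "modules flist present"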
Project conventions
- Edit policy:
@@ -157,8 +190,9 @@ Key files to keep in sync with behavior decisions
- Validation diagnostics: [bash.initramfs_validate()](scripts/lib/initramfs.sh:799)
- Archive creation (pre-CPIO checks): [bash.initramfs_create_cpio()](scripts/lib/initramfs.sh:688)
- Path normalization after config: [bash.common.sh](scripts/lib/common.sh:236)
- Modules/firmware remote fallback: [sh.modules.sh](config/zinit/init/modules.sh:1), [sh.firmware.sh](config/zinit/init/firmware.sh:1)
- Modules/firmware remote fallback + readiness: [sh.modules.sh](config/zinit/init/modules.sh:1), [sh.firmware.sh](config/zinit/init/firmware.sh:1)
- Kernel stage defaulting for archive: [bash.stage_kernel_build()](scripts/build.sh:398)
- GRUB USB creator: [scripts/make-grub-usb.sh](scripts/make-grub-usb.sh)
- Operational notes: [docs/NOTES.md](docs/NOTES.md)
When in doubt

docs/TODO.md (new file, 73 lines)
View File

@@ -0,0 +1,73 @@
# Zero-OS Builder Persistent TODO
This canonical checklist is the single source of truth for ongoing work. It mirrors the live task tracker but is versioned in-repo for review and PRs. Jump-points reference exact functions and files for quick triage.
## High-level
- [x] Regenerate repository function index: [scripts/functionlist.md](../scripts/functionlist.md)
- [x] Refresh NOTES with jump-points and roadmap: [docs/NOTES.md](NOTES.md)
- [x] Extend RFS design with RESP/DB-style backend: [docs/rfs-flists.md](rfs-flists.md)
- [x] Make Rust components Git fetch non-destructive: [bash.components_download_git()](../scripts/lib/components.sh:72)
- [ ] Update zinit config for "zosception" workflow: [config/zinit/](../config/zinit/)
- [ ] Add zosstorage to the initramfs (package/build/install + zinit unit)
- [ ] Validate via minimal rebuild and boot tests; refine depmod/udev docs
- [ ] Commit and push documentation and configuration updates (post-zosception/zosstorage)
## Zosception (zinit service graph and ordering)
- [ ] Define service graph changes and ordering constraints
- Reference current triggers:
- Early coldplug: [config/zinit/udev-trigger.yaml](../config/zinit/udev-trigger.yaml)
- Network readiness: [config/zinit/network.yaml](../config/zinit/network.yaml)
- Post-mount coldplug: [config/zinit/udev-rfs.yaml](../config/zinit/udev-rfs.yaml)
- [ ] Add/update units under [config/zinit/](../config/zinit/) with proper after/needs/wants
- [ ] Validate with stage runner and logs
- Stage runner: [bash.stage_run()](../scripts/lib/stages.sh:99)
- Main flow: [bash.main_build_process()](../scripts/build.sh:214)
## ZOS Storage in initramfs
- [ ] Decide delivery mechanism:
- [ ] APK via [config/packages.list](../config/packages.list)
- [ ] Source build via [bash.components_parse_sources_conf()](../scripts/lib/components.sh:13) with a new build function
- [ ] Copy/install into initramfs
- Components copy: [bash.initramfs_copy_components()](../scripts/lib/initramfs.sh:102)
- Zinit setup: [bash.initramfs_setup_zinit()](../scripts/lib/initramfs.sh:13)
- [ ] Create zinit unit(s) for zosstorage startup and ordering
- Place after network and before RFS if it provides storage used by rfs
- [ ] Add smoke command to confirm presence in image (e.g., which/--version) during [bash.initramfs_validate()](../scripts/lib/initramfs.sh:820)
## RFS backends — implementation follow-up (beyond design)
- [ ] RESP uploader shim for packers (non-breaking)
- Packers entrypoints: [scripts/rfs/pack-modules.sh](../scripts/rfs/pack-modules.sh), [scripts/rfs/pack-firmware.sh](../scripts/rfs/pack-firmware.sh)
- Config loader: [bash.rfs_common_load_rfs_s3_config()](../scripts/rfs/common.sh:82) → extend to parse RESP_* (non-breaking)
- Store URI builder (S3 exists): [bash.rfs_common_build_s3_store_uri()](../scripts/rfs/common.sh:137)
- Manifest patching remains:
- Stores table: [bash.rfs_common_patch_flist_stores()](../scripts/rfs/common.sh:385)
- route.url: [bash.rfs_common_patch_flist_route_url()](../scripts/rfs/common.sh:494)
- [ ] Connectivity checks and retries for RESP path
- [ ] Local cache for pack-time (optional)
## Validation and boot tests
- [ ] Minimal rebuild after zinit/units/edit
- Helper: [scripts/rebuild-after-zinit.sh](../scripts/rebuild-after-zinit.sh)
- [ ] Validate contents before CPIO
- Create: [bash.initramfs_create_cpio()](../scripts/lib/initramfs.sh:691)
- Validate: [bash.initramfs_validate()](../scripts/lib/initramfs.sh:820)
- [ ] QEMU / cloud-hypervisor smoke tests
- Test runner: [runit.sh](../runit.sh)
- [ ] Kernel embed path and versioning sanity
- Embed config: [bash.kernel_modify_config_for_initramfs()](../scripts/lib/kernel.sh:130)
- Full version logic: [bash.kernel_get_full_version()](../scripts/lib/kernel.sh:14)
## Operational conventions (keep)
- [ ] Diagnostics-first changes; add logs before fixes
- [ ] Absolute path normalization respected
- Normalization: [scripts/lib/common.sh](../scripts/lib/common.sh:244)
Notes
- Keep this file in sync with the live tracker. Reference it in PR descriptions.
- Use the clickable references above for rapid navigation.

View File

@@ -31,7 +31,7 @@ Key property: depmod's default operation opens many .ko files to read .modinfo
- Option C: Run depmod -a only if you changed the module set or path layout. Expect many small reads on first run.
3) Firmware implications
- No depmod impact, but udev coldplug will probe devices. Keep firmware files accessible via the firmware flist mount (e.g., /usr/lib/firmware).
- No depmod impact, but udev coldplug will probe devices. Keep firmware files accessible via the firmware flist mount (e.g., /lib/firmware).
- Since firmware loads on-demand by the kernel/driver, the lazy store will fetch only needed blobs.
4) Post-processing .fl to use a web endpoint (garage S3 private)

docs/img_1758452705037.png (new binary file, 230 KiB; preview not shown)

View File

@@ -13,7 +13,6 @@ Key sourced libraries:
- [initramfs.sh](scripts/lib/initramfs.sh)
- [stages.sh](scripts/lib/stages.sh)
- [docker.sh](scripts/lib/docker.sh)
- [testing.sh](scripts/lib/testing.sh)
Main stages executed (incremental via [stage_run()](scripts/lib/stages.sh:99)):
1) alpine_extract, alpine_configure, alpine_packages
@@ -124,7 +123,7 @@ Directory: scripts/rfs
## Future runtime units (deferred)
Will be added as new zinit units once flist generation is validated:
- Mount firmware flist read-only at /usr/lib/firmware
- Mount firmware flist read-only at /lib/firmware (overmount to hide initramfs firmware beneath)
- Mount modules flist read-only at /lib/modules/<FULL_VERSION>
- Run depmod -a <FULL_VERSION>
- Run udev coldplug sequence (reload, trigger add, settle)

View File

@@ -15,9 +15,10 @@ Scope of this change
Inputs
- Built kernel modules present in the dev-container (from kernel build stages):
- Preferred: /lib/modules/KERNEL_FULL_VERSION
- Firmware tree:
- Preferred: $PROJECT_ROOT/firmware (prepopulated tree from dev-container: “$root/firmware”)
- Fallback: initramfs/lib/firmware created by apk install of firmware packages
- Firmware source for RFS pack:
- Install all Alpine linux-firmware* packages into the build container and use /lib/firmware as the source (full set).
- Initramfs fallback (build-time):
- Selective firmware packages installed by [bash.alpine_install_firmware()](scripts/lib/alpine.sh:392) into initramfs/lib/firmware (kept inside the initramfs).
- Kernel version derivation (never use uname -r in container):
- Combine KERNEL_VERSION from [config/build.conf](config/build.conf) and LOCALVERSION from [config/kernel.config](config/kernel.config).
- This matches [kernel_get_full_version()](scripts/lib/kernel.sh:14).
@@ -71,7 +72,7 @@ Scripts to add (standalone)
Runtime (deferred to a follow-up)
- New zinit units to mount and coldplug:
- Mount firmware flist read-only at /usr/lib/firmware
- Mount firmware flist read-only at /lib/firmware
- Mount modules flist at /lib/modules/KERNEL_FULL_VERSION
- Run depmod -a KERNEL_FULL_VERSION
- udevadm control --reload; udevadm trigger --action=add; udevadm settle
@@ -164,3 +165,45 @@ Use the helper to inspect a manifest, optionally listing entries and testing a l
- scripts/rfs/verify-flist.sh -m dist/flists/firmware-latest.fl --tree
- Inspect + mount test to a temp dir:
- sudo scripts/rfs/verify-flist.sh -m dist/flists/modules-6.12.44-Zero-OS.fl --mount
## Additional blob store backends (design)
This extends the existing S3/HTTP approach with a RESP/DB-style backend option for rfs blob storage. It is a design-only addition; CLI and scripts will be extended in a follow-up.
Scope
- Keep S3 flow intact via [scripts/rfs/common.sh](scripts/rfs/common.sh:137), [scripts/rfs/common.sh](scripts/rfs/common.sh:385), and [scripts/rfs/common.sh](scripts/rfs/common.sh:494).
- Introduce RESP URIs that can be encoded in config and, later, resolved by rfs or a thin uploader shim invoked by:
- [scripts/rfs/pack-modules.sh](scripts/rfs/pack-modules.sh:1)
- [scripts/rfs/pack-firmware.sh](scripts/rfs/pack-firmware.sh:1)
URI schemes (draft)
- resp://host:port/db?prefix=blobs
- resp+tls://host:port/db?prefix=blobs&ca=/etc/ssl/certs/ca.pem
- resp+sentinel://sentinelHost:26379/mymaster?prefix=blobs
- Credentials may be provided via URI userinfo or config (recommended: config only).
Operations (minimal set)
- PUT blob: write content-addressed key (e.g., prefix/ab/cd/hash)
- GET blob: fetch by exact key
- Exists/HEAD: presence test by key
- Optional batching: pipelined MGET for prefetch
Config keys (see example additions in config/rfs.conf.example)
- RESP_ENDPOINT (host:port), RESP_DB (integer), RESP_PREFIX (path namespace)
- RESP_USERNAME/RESP_PASSWORD (optional), RESP_TLS=0/1 (+ RESP_CA if needed)
- RESP_SENTINEL and RESP_MASTER for sentinel deployments
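A sketch of the corresponding entries for config/rfs.conf.example (key names as proposed above; all values are illustrative placeholders, not defaults):
# RESP backend (design placeholders)
RESP_ENDPOINT="resp.example.internal:6379"
RESP_DB="0"
RESP_PREFIX="blobs"
RESP_TLS="0"
#RESP_USERNAME=""                      # optional; prefer config over URI userinfo
#RESP_PASSWORD=""                      # optional
#RESP_CA="/etc/ssl/certs/ca.pem"       # only when RESP_TLS=1
#RESP_SENTINEL="sentinelHost:26379"    # sentinel deployments
#RESP_MASTER="mymaster"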
Manifests and routes
- Keep S3 store in flist stores table (fallback) while enabling route.url patching to HTTP/S3 for read-only access:
- Patch stores table as today via [scripts/rfs/common.sh](scripts/rfs/common.sh:385)
- Patch route.url as today via [scripts/rfs/common.sh](scripts/rfs/common.sh:494)
- RESP may be used primarily for pack-time blob uploads or as an additional store the CLI can consume later.
Security
- Do not embed write credentials in manifests.
- Read-only credentials may be embedded in route.url if required, mirroring S3 pattern.
Next steps
- Implement RESP uploader shim called from pack scripts; keep the CLI S3 flow unchanged.
- Extend config loader in [scripts/rfs/common.sh](scripts/rfs/common.sh:82) to parse RESP_* variables.
- Add verification routines to sanity-check connectivity before pack.
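A minimal connectivity pre-check for the pack scripts, as a sketch (assumes redis-cli in the build container and a plain, non-TLS endpoint; resp_check is a hypothetical helper name, and TLS/sentinel variants would need more handling):
resp_check() {
  host="${RESP_ENDPOINT%:*}"
  port="${RESP_ENDPOINT#*:}"
  # A healthy RESP endpoint answers PING with PONG
  [ "$(redis-cli -h "$host" -p "$port" -n "${RESP_DB:-0}" PING 2>/dev/null)" = "PONG" ]
}
resp_check || { echo "RESP endpoint not reachable: ${RESP_ENDPOINT}" >&2; exit 1; }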

View File

@@ -1 +0,0 @@
/bin/busybox

View File

@@ -1 +0,0 @@
/bin/busybox

View File

@@ -1 +0,0 @@
/bin/busybox

View File

@@ -1 +0,0 @@
/bin/busybox

Binary file not shown.

Some files were not shown because too many files have changed in this diff.