Compare commits

...

9 Commits

Author SHA1 Message Date
947d156921 Added youki build and formatting of scripts
Some checks failed
Build Zero OS Initramfs / build (push) Has been cancelled
Build Zero OS Initramfs / test-matrix (qemu, basic) (push) Has been cancelled
Build Zero OS Initramfs / test-matrix (qemu, serial) (push) Has been cancelled
2025-11-11 20:49:36 +01:00
721e26a855 build: remove testing.sh in favor of runit.sh; add claude.md reference
Replace inline boot testing with standalone runit.sh runner for clarity:
- Remove scripts/lib/testing.sh source and boot_tests stage from build.sh
- Remove --skip-tests option from build.sh and rebuild-after-zinit.sh
- Update all docs to reference runit.sh for QEMU/cloud-hypervisor testing
- Add comprehensive claude.md as AI assistant entry point with guidelines

Testing is now fully decoupled from build pipeline; use ./runit.sh for
QEMU/cloud-hypervisor validation after builds complete.
2025-11-04 13:47:24 +01:00
334821dacf Integrate zosstorage build path and runtime orchestration
Some checks failed
Build Zero OS Initramfs / build (push) Has been cancelled
Build Zero OS Initramfs / test-matrix (qemu, basic) (push) Has been cancelled
Build Zero OS Initramfs / test-matrix (qemu, serial) (push) Has been cancelled
Summary:

* add openssh-client to the builder image and mount host SSH keys into the dev container when available

* switch RFS to git builds, register the zosstorage source, and document the extra Rust component

* wire zosstorage into the build: add build_zosstorage(), ship the binary in the initramfs, and extend component validation

* refresh kernel configuration to 6.12.49 while dropping Xen guest selections and enabling counted-by support

* tighten runtime configs: use cached mycelium key path, add zosstorage zinit unit, bootstrap ovsdb-server, and enable openvswitch module

* adjust the network health check ping invocation and fix the RFS pack-tree --debug flag order

* update NOTES changelog, README component list, and introduce a runit helper for qemu/cloud-hypervisor testing

* add ovsdb init script wiring under config/zinit/init and ensure zosstorage is available before mycelium
2025-10-14 17:47:13 +02:00
cf05e0ca5b rfs: add pack-tree.sh to pack arbitrary directory to flist using config/rfs.conf; enable --debug on rfs pack for verbose diagnostics
Some checks failed
Build Zero OS Initramfs / build (push) Has been cancelled
Build Zero OS Initramfs / test-matrix (qemu, basic) (push) Has been cancelled
Build Zero OS Initramfs / test-matrix (qemu, serial) (push) Has been cancelled
2025-10-02 17:41:16 +02:00
883ffcf734 components: read config/sources.conf to determine components, versions, and build funcs; remove hardcoded list. verification: accept rfs built or prebuilt binary paths. 2025-10-02 17:13:35 +02:00
818f5037f4 docs(TODO): use relative links from docs/ to ../scripts and ../config so links work in GitHub and VS Code 2025-10-02 11:44:41 +02:00
d5e9bf2d9a docs: add persistent TODO.md checklist with clickable references to code and configs 2025-10-01 18:09:28 +02:00
10ba31acb4 docs: regenerate scripts/functionlist.md; refresh NOTES with jump-points and roadmap; extend rfs-flists with RESP backend design. config: add RESP placeholders to rfs.conf.example. components: keep previous non-destructive git clone logic. 2025-10-01 18:06:13 +02:00
6193d241ea components: reuse existing git tree in components_download_git; config: update packages.list 2025-10-01 17:47:51 +02:00
30 changed files with 2502 additions and 725 deletions


@@ -7,6 +7,7 @@ RUN apk add --no-cache \
     rustup \
     upx \
     git \
+    openssh-client \
     wget \
     curl \
     tar \
@@ -19,6 +20,7 @@ RUN apk add --no-cache \
     musl-utils \
     pkgconfig \
     openssl openssl-dev \
+    libseccomp libseccomp-dev \
     perl \
     shadow \
     bash \

README.md — 121 changed lines

@@ -7,7 +7,7 @@ A comprehensive build system for creating custom Alpine Linux 3.22 x86_64 initra
 - **Alpine Linux 3.22** miniroot as base system
 - **zinit** process manager (complete OpenRC replacement)
 - **Rootless containers** (Docker/Podman compatible)
-- **Rust components** with musl targeting (zinit, rfs, mycelium)
+- **Rust components** with musl targeting (zinit, rfs, mycelium, zosstorage)
 - **Aggressive optimization** (strip + UPX compression)
 - **2-stage module loading** for hardware support
 - **GitHub Actions** compatible build pipeline
@@ -103,8 +103,7 @@ zosbuilder/
 │   │   ├── alpine.sh     # Alpine operations
 │   │   ├── components.sh # source building
 │   │   ├── initramfs.sh  # assembly & optimization
-│   │   ├── kernel.sh     # kernel building
-│   │   └── testing.sh    # QEMU/cloud-hypervisor
+│   │   └── kernel.sh     # kernel building
 │   ├── build.sh          # main orchestrator
 │   └── clean.sh          # cleanup script
 ├── initramfs/            # build output (generated)
@@ -222,6 +221,7 @@ Services are migrated from existing `configs/zinit/` directory with proper initi
 - **zinit**: Standard cargo build
 - **rfs**: Standard cargo build
 - **mycelium**: Build in `myceliumd/` subdirectory
+- **zosstorage**: Build from the storage orchestration component for Zero-OS
 4. Install binaries to initramfs
 ### Phase 4: System Configuration
@@ -243,34 +243,40 @@ Services are migrated from existing `configs/zinit/` directory with proper initi
 ### Phase 6: Packaging
 1. Create `initramfs.cpio.xz` with XZ compression
 2. Build kernel with embedded initramfs
-3. Generate `vmlinuz.efi`
+3. Generate `vmlinuz.efi` (default kernel)
+4. Generate versioned kernel: `vmlinuz-{VERSION}-{ZINIT_HASH}.efi`
+5. Optionally upload versioned kernel to S3 (set `UPLOAD_KERNEL=true`)
 ## Testing
 ### QEMU Testing
 ```bash
-# Boot test with QEMU
-./scripts/test.sh --qemu
-# With serial console
-./scripts/test.sh --qemu --serial
+# Boot test with QEMU (default)
+./runit.sh
+# With custom parameters
+./runit.sh --hypervisor qemu --memory 2048 --disks 3
 ```
 ### cloud-hypervisor Testing
 ```bash
 # Boot test with cloud-hypervisor
-./scripts/test.sh --cloud-hypervisor
+./runit.sh --hypervisor ch
+# With disk reset
+./runit.sh --hypervisor ch --reset --disks 5
 ```
-### Custom Testing
+### Advanced Options
 ```bash
-# Manual QEMU command
-qemu-system-x86_64 \
-  -kernel dist/vmlinuz.efi \
-  -m 512M \
-  -nographic \
-  -serial mon:stdio \
-  -append "console=ttyS0,115200 console=tty1 loglevel=7"
+# See all options
+./runit.sh --help
+# Custom disk size and bridge
+./runit.sh --disk-size 20G --bridge zosbr --disks 4
+# Environment variables
+HYPERVISOR=ch NUM_DISKS=5 ./runit.sh
 ```
 ## Size Optimization
@@ -320,7 +326,7 @@ jobs:
       - name: Build
         run: ./scripts/build.sh
       - name: Test
-        run: ./scripts/test.sh --qemu
+        run: ./runit.sh --hypervisor qemu
 ```
 ## Advanced Usage
@@ -353,6 +359,72 @@ function build_myapp() {
 }
 ```
+### S3 Uploads (Kernel & RFS Flists)
+Automatically upload build artifacts to S3-compatible storage:
+#### Configuration
+Create `config/rfs.conf`:
+```bash
+S3_ENDPOINT="https://s3.example.com:9000"
+S3_REGION="us-east-1"
+S3_BUCKET="zos"
+S3_PREFIX="flists/zosbuilder"
+S3_ACCESS_KEY="YOUR_ACCESS_KEY"
+S3_SECRET_KEY="YOUR_SECRET_KEY"
+```
+#### Upload Kernel
+```bash
+# Enable kernel upload
+UPLOAD_KERNEL=true ./scripts/build.sh
+# Custom kernel subpath (default: kernel)
+KERNEL_SUBPATH=kernels UPLOAD_KERNEL=true ./scripts/build.sh
+```
+**Uploaded files:**
+- `s3://{bucket}/{prefix}/kernel/vmlinuz-{VERSION}-{ZINIT_HASH}.efi` - Versioned kernel
+- `s3://{bucket}/{prefix}/kernel/kernels.txt` - Text index (one kernel per line)
+- `s3://{bucket}/{prefix}/kernel/kernels.json` - JSON index with metadata
+**Index files:**
+The build automatically generates and uploads index files listing all available kernels in the S3 bucket. This enables:
+- Easy kernel selection in web UIs (dropdown menus)
+- Programmatic access without S3 API listing
+- Metadata like upload timestamp and kernel count (JSON format)
+**JSON index format:**
+```json
+{
+  "kernels": [
+    "vmlinuz-6.12.44-Zero-OS-abc1234.efi",
+    "vmlinuz-6.12.44-Zero-OS-def5678.efi"
+  ],
+  "updated": "2025-01-04T12:00:00Z",
+  "count": 2
+}
+```
+#### Upload RFS Flists
+```bash
+# Enable flist uploads
+UPLOAD_MANIFESTS=true ./scripts/build.sh
+```
+Uploaded as:
+- `s3://{bucket}/{prefix}/manifests/modules-{VERSION}.fl`
+- `s3://{bucket}/{prefix}/manifests/firmware-{TAG}.fl`
+#### Requirements
+- MinIO Client (`mcli` or `mc`) must be installed
+- Valid S3 credentials in `config/rfs.conf`
 ### Container Builds
 Build in isolated container:
@@ -431,13 +503,16 @@ export DEBUG=1
 ```bash
 # Setup development environment
-./scripts/setup-dev.sh
+./scripts/dev-container.sh start
-# Run tests
-./scripts/test.sh --all
+# Run incremental build
+./scripts/build.sh
-# Check size impact
-./scripts/analyze-size.sh --compare
+# Test with QEMU
+./runit.sh --hypervisor qemu
+# Test with cloud-hypervisor
+./runit.sh --hypervisor ch
 ```
 ## License

claude.md — new file, 523 lines

@@ -0,0 +1,523 @@
# Claude Code Reference for Zero-OS Builder
This document provides essential context for Claude Code (or any AI assistant) working with this Zero-OS Alpine Initramfs Builder repository.
## Project Overview
**What is this?**
A sophisticated build system for creating custom Alpine Linux 3.22 x86_64 initramfs images with zinit process management, designed for Zero-OS deployment on ThreeFold Grid.
**Key Features:**
- Container-based reproducible builds (rootless podman/docker)
- Incremental staged build pipeline with completion markers
- zinit process manager (complete OpenRC replacement)
- RFS (Remote File System) for lazy-loading modules/firmware from S3
- Rust components built with musl static linking
- Aggressive size optimization (strip + UPX)
- Embedded initramfs in kernel (single vmlinuz.efi output)
## Repository Structure
```
zosbuilder/
├── config/ # All configuration files
│ ├── build.conf # Build settings (versions, paths, flags)
│ ├── packages.list # Alpine packages to install
│ ├── sources.conf # ThreeFold components to build
│ ├── modules.conf # 2-stage kernel module loading
│ ├── firmware.conf # Firmware to include in initramfs
│ ├── kernel.config # Linux kernel configuration
│ ├── init # /init script for initramfs
│ └── zinit/ # zinit service definitions (YAML)
├── scripts/
│ ├── build.sh # Main orchestrator (DO NOT EDIT LIGHTLY)
│ ├── clean.sh # Clean all artifacts
│ ├── dev-container.sh # Persistent dev container manager
│ ├── rebuild-after-zinit.sh # Quick rebuild helper
│ ├── lib/ # Modular build libraries
│ │ ├── common.sh # Logging, path normalization, utilities
│ │ ├── stages.sh # Incremental stage tracking
│ │ ├── docker.sh # Container lifecycle
│ │ ├── alpine.sh # Alpine extraction, packages, cleanup
│ │ ├── components.sh # Build Rust components from sources.conf
│ │ ├── initramfs.sh # Assembly, optimization, CPIO creation
│ │ └── kernel.sh # Kernel download, config, build, embed
│ └── rfs/ # RFS flist generation scripts
│ ├── common.sh # S3 config, version computation
│ ├── pack-modules.sh # Create modules flist
│ ├── pack-firmware.sh # Create firmware flist
│ └── verify-flist.sh # Inspect/test flists
├── docs/ # Detailed documentation
│ ├── NOTES.md # Operational knowledge & troubleshooting
│ ├── PROMPT.md # Agent guidance (strict debugger mode)
│ ├── TODO.md # Persistent checklist with code refs
│ ├── AGENTS.md # Quick reference for agents
│ ├── rfs-flists.md # RFS design and runtime flow
│ ├── review-rfs-integration.md # Integration points
│ └── depmod-behavior.md # Module dependency details
├── runit.sh # Test runner (QEMU/cloud-hypervisor)
├── initramfs/ # Generated initramfs tree
├── components/ # Generated component sources
├── kernel/ # Generated kernel source
├── dist/ # Final outputs
│ ├── vmlinuz.efi # Kernel with embedded initramfs
│ └── initramfs.cpio.xz # Standalone initramfs archive
└── .build-stages/ # Incremental build markers (*.done files)
```
## Core Concepts
### 1. Incremental Staged Builds
**How it works:**
- Each stage creates a `.build-stages/<stage_name>.done` marker on success
- Subsequent builds skip completed stages unless forced
- Use `./scripts/build.sh --show-stages` to see status
- Use `./scripts/build.sh --rebuild-from=<stage>` to restart from a specific stage
- Manually remove `.done` files to re-run specific stages
**Build stages (in order):**
```
alpine_extract → alpine_configure → alpine_packages → alpine_firmware
→ components_build → components_verify → kernel_modules
→ init_script → components_copy → zinit_setup
→ modules_setup → modules_copy → cleanup → rfs_flists
→ validation → initramfs_create → initramfs_test → kernel_build
```
**Key insight:** The build ALWAYS runs inside a container. Host invocations auto-spawn containers.
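The marker pattern above can be sketched in a few lines of shell. This is a hypothetical illustration (the `run_stage` helper and `STAGE_DIR` layout are assumptions for the sketch); the real logic lives in `scripts/lib/stages.sh` and differs in detail:

```bash
# Hypothetical sketch of the .done-marker pattern used for incremental builds.
STAGE_DIR=".build-stages"
mkdir -p "$STAGE_DIR"

run_stage() {
    stage="$1"; shift
    marker="$STAGE_DIR/$stage.done"
    # Skip if a previous run completed this stage (unless forced)
    if [ -f "$marker" ] && [ "${FORCE_REBUILD:-false}" != "true" ]; then
        echo "SKIP $stage"
        return 0
    fi
    "$@" && touch "$marker" && echo "DONE $stage"
}

run_stage alpine_extract true   # "true" stands in for the real stage function
run_stage alpine_extract true   # second invocation hits the marker and skips
```

Removing the `.done` file (or exporting `FORCE_REBUILD=true`) re-runs the stage, which is exactly what `--rebuild-from=<stage>` automates.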
### 2. Container-First Architecture
**Why containers?**
- Reproducible toolchain (Alpine 3.22 base with exact dependencies)
- Rootless execution (no privileged access needed)
- Isolation from host environment
- GitHub Actions compatible
**Container modes:**
- **Transient:** `./scripts/build.sh` spawns, builds, exits
- **Persistent:** `./scripts/dev-container.sh start/shell/build`
**Important:** Directory paths are normalized to absolute PROJECT_ROOT to avoid CWD issues when stages change directories (especially kernel builds).
### 3. Component Build System
**sources.conf format:**
```
TYPE:NAME:URL:VERSION:BUILD_FUNCTION[:EXTRA]
```
**Example:**
```bash
git:zinit:https://github.com/threefoldtech/zinit:master:build_zinit
git:rfs:https://github.com/threefoldtech/rfs:development:build_rfs
git:mycelium:https://github.com/threefoldtech/mycelium:0.6.1:build_mycelium
release:corex:https://github.com/threefoldtech/corex/releases/download/2.1.4/corex-2.1.4-amd64-linux-static:2.1.4:install_corex:rename=corex
```
**Build functions** are defined in `scripts/lib/components.sh` and handle:
- Rust builds with `x86_64-unknown-linux-musl` target
- Static linking via `RUSTFLAGS="-C target-feature=+crt-static"`
- Special cases (e.g., mycelium builds in `myceliumd/` subdirectory)
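One subtlety of this format is that the URL field itself contains `://`, so an entry cannot be split blindly on `:`. A minimal parsing sketch, peeling fields from both ends and ignoring the optional `:EXTRA` field (the real parser in `scripts/lib/components.sh` handles it):

```bash
# Sketch: parse one sources.conf entry of the form
# TYPE:NAME:URL:VERSION:BUILD_FUNCTION (optional :EXTRA not handled here).
parse_source() {
    entry="$1"
    TYPE="${entry%%:*}";       entry="${entry#*:}"   # peel from the left
    NAME="${entry%%:*}";       entry="${entry#*:}"
    BUILD_FUNC="${entry##*:}"; entry="${entry%:*}"   # peel from the right
    VERSION="${entry##*:}";    URL="${entry%:*}"     # URL keeps its colons
}

parse_source "git:zinit:https://github.com/threefoldtech/zinit:master:build_zinit"
echo "$TYPE $NAME $VERSION $BUILD_FUNC"
echo "$URL"
```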
### 4. RFS Flists (Remote File System)
**Purpose:** Lazy-load kernel modules and firmware from S3 at runtime
**Flow:**
1. Build stage creates flists: `modules-<KERNEL_VERSION>.fl` and `firmware-<TAG>.fl`
2. Flists are SQLite databases containing:
- Content-addressed blob references
- S3 store URIs (patched for read-only access)
- Directory tree metadata
3. Flists embedded in initramfs at `/etc/rfs/`
4. Runtime: zinit units mount flists over `/lib/modules/` and `/lib/firmware/`
5. Dual udev coldplug: early (before RFS) for networking, post-RFS for new hardware
**Key files:**
- `scripts/rfs/pack-modules.sh` - Creates modules flist from container `/lib/modules/`
- `scripts/rfs/pack-firmware.sh` - Creates firmware flist from Alpine packages
- `config/zinit/init/modules.sh` - Runtime mount script
- `config/zinit/init/firmware.sh` - Runtime mount script
### 5. zinit Service Management
**No OpenRC:** This system uses zinit exclusively for process management.
**Service graph:**
```
/init → zinit → [stage1-modules, udevd, depmod]
→ udev-trigger (early coldplug)
→ network
→ rfs-modules + rfs-firmware (mount flists)
→ udev-rfs (post-RFS coldplug)
→ services
```
**Service definitions:** YAML files in `config/zinit/` with `after:`, `needs:`, `wants:` dependencies
### 6. Kernel Versioning and S3 Upload
**Versioned Kernel Output:**
- Standard kernel: `dist/vmlinuz.efi` (for compatibility)
- Versioned kernel: `dist/vmlinuz-{VERSION}-{ZINIT_HASH}.efi`
- Example: `vmlinuz-6.12.44-Zero-OS-a1b2c3d.efi`
**Version components:**
- `{VERSION}`: Full kernel version from `KERNEL_VERSION` + `CONFIG_LOCALVERSION`
- `{ZINIT_HASH}`: Short git commit hash from `components/zinit/.git`
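Putting the two components together, the versioned filename is assembled along these lines (a sketch using the example values above; the actual hash lookup is `get_git_commit_hash()` in `scripts/lib/common.sh`):

```bash
# Sketch of versioned kernel filename assembly.
KERNEL_VERSION="6.12.44"
LOCALVERSION="-Zero-OS"   # from CONFIG_LOCALVERSION
ZINIT_HASH="a1b2c3d"      # e.g. git -C components/zinit rev-parse --short HEAD
VERSIONED="vmlinuz-${KERNEL_VERSION}${LOCALVERSION}-${ZINIT_HASH}.efi"
echo "$VERSIONED"         # vmlinuz-6.12.44-Zero-OS-a1b2c3d.efi
```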
**S3 Upload (optional):**
- Controlled by `UPLOAD_KERNEL=true` environment variable
- Uses MinIO client (`mcli` or `mc`) to upload to S3-compatible storage
- Uploads versioned kernel to: `s3://{bucket}/{prefix}/kernel/{versioned_filename}`
**Kernel Index Generation:**
After uploading, automatically generates and uploads index files:
- `kernels.txt` - Plain text, one kernel per line, sorted reverse chronologically
- `kernels.json` - JSON format with metadata (timestamp, count)
**Why index files?**
- S3 web interfaces often don't support directory listings
- Enables dropdown menus in web UIs without S3 API access
- Provides kernel discovery for deployment tools
**JSON index structure:**
```json
{
"kernels": ["vmlinuz-6.12.44-Zero-OS-abc1234.efi", ...],
"updated": "2025-01-04T12:00:00Z",
"count": 2
}
```
**Key functions:**
- `get_git_commit_hash()` in `scripts/lib/common.sh` - Extracts git hash
- `kernel_build_with_initramfs()` in `scripts/lib/kernel.sh` - Creates versioned kernel
- `kernel_upload_to_s3()` in `scripts/lib/kernel.sh` - Uploads to S3
- `kernel_generate_index()` in `scripts/lib/kernel.sh` - Generates and uploads index
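The two index files can be produced roughly as follows. This is a hedged illustration: the real `kernel_generate_index()` in `scripts/lib/kernel.sh` lists the S3 bucket contents rather than a local variable, and its exact formatting may differ:

```bash
# Sketch: generate kernels.txt and kernels.json from a list of kernel names.
kernels="vmlinuz-6.12.44-Zero-OS-abc1234.efi
vmlinuz-6.12.44-Zero-OS-def5678.efi"

printf '%s\n' "$kernels" > kernels.txt

count=$(printf '%s\n' "$kernels" | wc -l | tr -d ' ')
{
  printf '{\n  "kernels": [\n'
  # Quote each name; strip the trailing comma from the last entry
  printf '%s\n' "$kernels" | sed 's/^/    "/; s/$/",/' | sed '$s/,$//'
  printf '  ],\n  "updated": "%s",\n  "count": %s\n}\n' \
    "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$count"
} > kernels.json
```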
## Critical Conventions
### Path Normalization
**Problem:** Stages can change CWD (kernel build uses `/workspace/kernel/current`)
**Solution:** All paths normalized to absolute at startup in `scripts/lib/common.sh:244`
**Variables affected:**
- `INSTALL_DIR` (initramfs/)
- `COMPONENTS_DIR` (components/)
- `KERNEL_DIR` (kernel/)
- `DIST_DIR` (dist/)
**Never use relative paths** when calling functions that might be in different CWDs.
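The normalization amounts to prefixing any relative path with the project root once, at startup. A minimal sketch of the idea (PROJECT_ROOT is hardcoded here for illustration; the real code around `scripts/lib/common.sh:244` derives it from the script location):

```bash
# Sketch: normalize a possibly-relative directory setting to an absolute path.
PROJECT_ROOT="/workspace"
INSTALL_DIR="initramfs"
case "$INSTALL_DIR" in
    /*) ;;                                        # already absolute, keep as-is
    *)  INSTALL_DIR="$PROJECT_ROOT/$INSTALL_DIR" ;;
esac
echo "$INSTALL_DIR"   # /workspace/initramfs
```

After this, later stages can `cd` anywhere (e.g. into `kernel/current`) without breaking references to the initramfs tree.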
### Branding and Security
**Passwordless root enforcement:**
- Applied in `scripts/lib/initramfs.sh:575` via `passwd -d -R "${initramfs_dir}" root`
- Creates `root::` in `/etc/shadow` (empty password field)
- Controlled by `ZEROOS_BRANDING` and `ZEROOS_PASSWORDLESS_ROOT` flags
**Never edit /etc/shadow manually** - always use `passwd` or `chpasswd` with chroot.
### Module Loading Strategy
**2-stage approach:**
- **Stage 1:** Critical boot modules (virtio, e1000, scsi) - loaded by zinit early
- **Stage 2:** Extended hardware (igb, ixgbe, i40e) - loaded after network
**Config:** `config/modules.conf` with `stage1:` and `stage2:` prefixes
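Reading those prefixes is straightforward shell. A hypothetical sketch (the module names and per-line format beyond the `stageN:` prefix are assumptions; the real file may carry more fields):

```bash
# Sketch: split modules.conf entries into stage1/stage2 lists by prefix.
stage1_mods=""; stage2_mods=""
while IFS= read -r line; do
    case "$line" in
        stage1:*) stage1_mods="$stage1_mods ${line#stage1:}" ;;
        stage2:*) stage2_mods="$stage2_mods ${line#stage2:}" ;;
    esac
done <<'EOF'
stage1:virtio_net
stage1:virtio_blk
stage2:ixgbe
EOF
echo "stage1:$stage1_mods"
echo "stage2:$stage2_mods"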
**Dependency resolution:**
- Uses `modinfo` to build dependency tree
- Resolves from container `/lib/modules/<FULL_VERSION>/`
- Must run after `kernel_modules` stage
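The resolution walk itself is a depth-first traversal that emits dependencies before dependents. In the sketch below a stub `deps_of` table stands in for `modinfo -F depends` (which is what the real build queries against `/lib/modules/<FULL_VERSION>/`), so only the walk is illustrated:

```bash
# Sketch: recursive module dependency resolution producing a load order.
deps_of() {
    # Stand-in for: modinfo -F depends "$1" (split on commas)
    case "$1" in
        virtio_net)  echo "virtio virtio_ring" ;;
        virtio_ring) echo "virtio" ;;
        *)           echo "" ;;
    esac
}

resolve() {
    case " $RESOLVED " in *" $1 "*) return ;; esac   # already handled
    for d in $(deps_of "$1"); do resolve "$d"; done  # deps first
    RESOLVED="$RESOLVED $1"
}

RESOLVED=""
resolve virtio_net
echo "load order:$RESOLVED"   # load order: virtio virtio_ring virtio_net
```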
### Firmware Policy
**For initramfs:** `config/firmware.conf` is the SINGLE source of truth
- Any firmware hints in `modules.conf` are IGNORED
- Prevents duplication/version mismatches
**For RFS:** Full Alpine `linux-firmware*` packages installed in container
- Packed from container `/lib/firmware/`
- Overmounts at runtime for extended hardware
## Common Workflows
### Full Build from Scratch
```bash
# Clean everything and rebuild
./scripts/build.sh --clean
# Or just rebuild all stages
./scripts/build.sh --force-rebuild
```
### Quick Iteration After Config Changes
```bash
# After editing zinit configs, init script, or modules.conf
./scripts/rebuild-after-zinit.sh
# With kernel rebuild
./scripts/rebuild-after-zinit.sh --with-kernel
# Dry-run to see what changed
./scripts/rebuild-after-zinit.sh --verify-only
```
### Minimal Manual Rebuild
```bash
# Remove specific stages
rm -f .build-stages/initramfs_create.done
rm -f .build-stages/validation.done
# Rebuild only those stages
DEBUG=1 ./scripts/build.sh
```
### Testing the Built Kernel
```bash
# QEMU (default)
./runit.sh
# cloud-hypervisor with 5 disks
./runit.sh --hypervisor ch --disks 5 --reset
# Custom memory and bridge
./runit.sh --memory 4096 --bridge zosbr
```
### Persistent Dev Container
```bash
# Start persistent container
./scripts/dev-container.sh start
# Enter shell
./scripts/dev-container.sh shell
# Run build inside
./scripts/dev-container.sh build
# Stop container
./scripts/dev-container.sh stop
```
## Debugging Guidelines
### Diagnostics-First Approach
**ALWAYS add diagnostics before fixes:**
1. Enable `DEBUG=1` for verbose safe_execute logs
2. Add strategic `log_debug` statements
3. Confirm hypothesis in logs
4. Then apply minimal fix
**Example:**
```bash
# Bad: Guess and fix
Edit file to fix suspected issue
# Good: Diagnose first
1. Add log_debug "Variable X=${X}, resolved=${resolved_path}"
2. DEBUG=1 ./scripts/build.sh
3. Confirm in output
4. Apply fix with evidence
```
### Key Diagnostic Functions
- `scripts/lib/common.sh`: `log_info`, `log_warn`, `log_error`, `log_debug`
- `scripts/lib/initramfs.sh:820`: Validation debug prints (input, PWD, PROJECT_ROOT, resolved paths)
- `scripts/lib/initramfs.sh:691`: Pre-CPIO sanity checks with file listings
### Common Issues and Solutions
**"Initramfs directory not found"**
- **Cause:** INSTALL_DIR interpreted as relative in different CWD
- **Fix:** Already patched - paths normalized at startup
- **Check:** Look for "Validation debug:" logs showing resolved paths
**"INITRAMFS_ARCHIVE unbound"**
- **Cause:** Incremental build skipped initramfs_create stage
- **Fix:** Already patched - stages default INITRAMFS_ARCHIVE if unset
- **Check:** `scripts/build.sh:401` logs "defaulting INITRAMFS_ARCHIVE"
**"Module dependency resolution fails"**
- **Cause:** Container `/lib/modules/<FULL_VERSION>` missing or stale
- **Fix:** `./scripts/rebuild-after-zinit.sh --refresh-container-mods`
- **Check:** Ensure `kernel_modules` stage completed successfully
**"Passwordless root not working"**
- **Cause:** Branding disabled or shadow file not updated
- **Fix:** Check ZEROOS_BRANDING=true in logs, verify /etc/shadow has `root::`
- **Verify:** Extract initramfs and `grep '^root:' etc/shadow`
## Important Files Quick Reference
### Must-Read Before Editing
- `scripts/build.sh` - Orchestrator with precise stage order
- `scripts/lib/common.sh` - Path normalization, logging, utilities
- `scripts/lib/stages.sh` - Stage tracking logic
- `config/build.conf` - Version pins, directory settings, flags
### Safe to Edit
- `config/zinit/*.yaml` - Service definitions
- `config/zinit/init/*.sh` - Runtime initialization scripts
- `config/modules.conf` - Module lists (stage1/stage2)
- `config/firmware.conf` - Initramfs firmware selection
- `config/packages.list` - Alpine packages
### Generated (Never Edit)
- `initramfs/` - Assembled initramfs tree
- `components/` - Downloaded component sources
- `kernel/` - Kernel source tree
- `dist/` - Build outputs
- `.build-stages/` - Completion markers
## Testing Architecture
**No built-in tests during build** - Tests run separately via `runit.sh`
**Why?**
- Build is for assembly, not validation
- Tests require hypervisor (QEMU/cloud-hypervisor)
- Separation allows faster iteration
**runit.sh features:**
- Multi-disk support (qcow2 for QEMU, raw for cloud-hypervisor)
- Network bridge/TAP configuration
- Persistent volumes (reset with `--reset`)
- Serial console logging
## Quick Command Reference
```bash
# Build
./scripts/build.sh # Incremental build
./scripts/build.sh --clean # Clean build
./scripts/build.sh --show-stages # Show completion status
./scripts/build.sh --rebuild-from=zinit_setup # Rebuild from stage
DEBUG=1 ./scripts/build.sh # Verbose output
# Rebuild helpers
./scripts/rebuild-after-zinit.sh # After zinit/init/modules changes
./scripts/rebuild-after-zinit.sh --with-kernel # Also rebuild kernel
./scripts/rebuild-after-zinit.sh --verify-only # Dry-run
# Testing
./runit.sh # QEMU test
./runit.sh --hypervisor ch # cloud-hypervisor test
./runit.sh --help # All options
# Dev container
./scripts/dev-container.sh start # Start persistent container
./scripts/dev-container.sh shell # Enter shell
./scripts/dev-container.sh build # Build inside container
./scripts/dev-container.sh stop # Stop container
# Cleanup
./scripts/clean.sh # Remove all generated files
rm -rf .build-stages/ # Reset stage markers
```
## Environment Variables
**Build control:**
- `DEBUG=1` - Enable verbose logging
- `FORCE_REBUILD=true` - Force rebuild all stages
- `REBUILD_FROM_STAGE=<name>` - Rebuild from specific stage
**Version overrides:**
- `ALPINE_VERSION=3.22` - Alpine Linux version
- `KERNEL_VERSION=6.12.44` - Linux kernel version
- `RUST_TARGET=x86_64-unknown-linux-musl` - Rust compilation target
**Firmware tagging:**
- `FIRMWARE_TAG=20250908` - Firmware flist version tag
**S3 upload control:**
- `UPLOAD_KERNEL=true` - Upload versioned kernel to S3 (default: false)
- `UPLOAD_MANIFESTS=true` - Upload RFS flists to S3 (default: false)
- `KERNEL_SUBPATH=kernel` - S3 subpath for kernel uploads (default: kernel)
**S3 configuration:**
- See `config/rfs.conf` for S3 endpoint, credentials, paths
- Used by both RFS flist uploads and kernel uploads
## Documentation Hierarchy
**Start here:**
1. `README.md` - User-facing guide with features and setup
2. This file (`claude.md`) - AI assistant context
**For development:**
3. `docs/NOTES.md` - Operational knowledge, troubleshooting
4. `docs/AGENTS.md` - Quick agent reference
5. `docs/TODO.md` - Current work checklist with code links
**For deep dives:**
6. `docs/PROMPT.md` - Strict debugger agent mode (diagnostics-first)
7. `docs/rfs-flists.md` - RFS design and implementation
8. `docs/review-rfs-integration.md` - Integration points analysis
9. `docs/depmod-behavior.md` - Module dependency deep dive
**Historical:**
10. `IMPLEMENTATION_PLAN.md` - Original design document
11. `GITHUB_ACTIONS.md` - CI/CD setup guide
## Project Philosophy
1. **Reproducibility:** Container-based builds ensure identical results
2. **Incrementality:** Stage markers minimize rebuild time
3. **Diagnostics-first:** Log before fixing, validate assumptions
4. **Minimal intervention:** Alpine + zinit only, no systemd/OpenRC
5. **Size-optimized:** Aggressive cleanup, strip, UPX compression
6. **Remote-ready:** RFS enables lazy-loading for extended hardware support
## Commit Message Guidelines
**DO NOT add Claude Code or AI assistant references to commit messages.**
Keep commits clean and professional:
- Focus on what changed and why
- Use conventional commit prefixes: `build:`, `docs:`, `fix:`, `feat:`, `refactor:`
- Be concise but descriptive
- No emoji unless project convention
- No "Generated with Claude Code" or "Co-Authored-By: Claude" footers
**Good example:**
```
build: remove testing.sh in favor of runit.sh
Replace inline boot testing with standalone runit.sh runner.
Tests now run separately from build pipeline for faster iteration.
```
**Bad example:**
```
build: remove testing.sh 🤖
Made some changes to testing.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
```
## Keywords for Quick Search
- **Build fails:** Check `DEBUG=1` logs, stage completion markers, container state
- **Module issues:** `kernel_modules` stage, `CONTAINER_MODULES_PATH`, depmod logs
- **Firmware missing:** `config/firmware.conf` for initramfs, RFS flist for runtime
- **zinit problems:** Service YAML syntax, dependency order, init script errors
- **Path errors:** Absolute path normalization in `common.sh:244`
- **Size too large:** Check cleanup stage, strip/UPX execution, package list
- **Container issues:** Rootless setup, subuid/subgid, podman vs docker
- **RFS mount fails:** S3 credentials, network readiness, flist manifest paths
- **Kernel upload:** `UPLOAD_KERNEL=true`, requires `config/rfs.conf`, MinIO client (`mcli`/`mc`)
- **Kernel index:** Auto-generated `kernels.txt`/`kernels.json` for dropdown UIs, updated on upload
---
**Last updated:** 2025-01-04
**Maintainer notes:** This file is the entry point for AI assistants. Keep it updated when architecture changes. Cross-reference with `docs/NOTES.md` for operational details.


@@ -61,6 +61,7 @@ ENABLE_STRIP="true"
 ENABLE_UPX="true"
 ENABLE_AGGRESSIVE_CLEANUP="true"
 ENABLE_2STAGE_MODULES="true"
+UPLOAD_KERNEL=true
 # Debug and development
 DEBUG_DEFAULT="0"


@@ -1,18 +1,18 @@
 #
 # Automatically generated file; DO NOT EDIT.
-# Linux/x86 6.12.44 Kernel Configuration
+# Linux/x86 6.12.49 Kernel Configuration
 #
-CONFIG_CC_VERSION_TEXT="gcc (Alpine 14.2.0) 14.2.0"
+CONFIG_CC_VERSION_TEXT="gcc (GCC) 15.2.1 20250813"
 CONFIG_CC_IS_GCC=y
-CONFIG_GCC_VERSION=140200
+CONFIG_GCC_VERSION=150201
 CONFIG_CLANG_VERSION=0
 CONFIG_AS_IS_GNU=y
-CONFIG_AS_VERSION=24400
+CONFIG_AS_VERSION=24500
 CONFIG_LD_IS_BFD=y
-CONFIG_LD_VERSION=24400
+CONFIG_LD_VERSION=24500
 CONFIG_LLD_VERSION=0
-CONFIG_RUSTC_VERSION=109000
-CONFIG_RUSTC_LLVM_VERSION=200108
+CONFIG_RUSTC_VERSION=108900
+CONFIG_RUSTC_LLVM_VERSION=200107
 CONFIG_CC_CAN_LINK=y
 CONFIG_CC_CAN_LINK_STATIC=y
 CONFIG_CC_HAS_ASM_GOTO_OUTPUT=y
@@ -20,6 +20,7 @@ CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT=y
 CONFIG_TOOLS_SUPPORT_RELR=y
 CONFIG_CC_HAS_ASM_INLINE=y
 CONFIG_CC_HAS_NO_PROFILE_FN_ATTR=y
+CONFIG_CC_HAS_COUNTED_BY=y
 CONFIG_LD_CAN_USE_KEEP_IN_OVERLAY=y
 CONFIG_RUSTC_HAS_UNNECESSARY_TRANSMUTES=y
 CONFIG_PAHOLE_VERSION=130
@@ -368,23 +369,10 @@ CONFIG_X86_SUPPORTS_MEMORY_FAILURE=y
 CONFIG_SCHED_OMIT_FRAME_POINTER=y
 CONFIG_HYPERVISOR_GUEST=y
 CONFIG_PARAVIRT=y
-CONFIG_PARAVIRT_XXL=y
 # CONFIG_PARAVIRT_DEBUG is not set
 CONFIG_PARAVIRT_SPINLOCKS=y
 CONFIG_X86_HV_CALLBACK_VECTOR=y
-CONFIG_XEN=y
-CONFIG_XEN_PV=y
-CONFIG_XEN_512GB=y
-CONFIG_XEN_PV_SMP=y
-CONFIG_XEN_PV_DOM0=y
-CONFIG_XEN_PVHVM=y
-CONFIG_XEN_PVHVM_SMP=y
-CONFIG_XEN_PVHVM_GUEST=y
-CONFIG_XEN_SAVE_RESTORE=y
-# CONFIG_XEN_DEBUG_FS is not set
-CONFIG_XEN_PVH=y
-CONFIG_XEN_DOM0=y
-CONFIG_XEN_PV_MSR_SAFE=y
+# CONFIG_XEN is not set
 CONFIG_KVM_GUEST=y
 CONFIG_ARCH_CPUIDLE_HALTPOLL=y
 CONFIG_PVH=y
@@ -528,7 +516,7 @@ CONFIG_HOTPLUG_CPU=y
 CONFIG_LEGACY_VSYSCALL_XONLY=y
 # CONFIG_LEGACY_VSYSCALL_NONE is not set
 CONFIG_CMDLINE_BOOL=y
-CONFIG_CMDLINE="intel_iommu=on kvm-intel.nested=1 console=ttyS1,115200n8 console=tty1 consoleblank=0 earlyprintk=serial,ttyS1,115200n8 loglevel=7"
+CONFIG_CMDLINE="consoleblank=0 earlyprintk=serial loglevel=7"
 # CONFIG_CMDLINE_OVERRIDE is not set
 CONFIG_MODIFY_LDT_SYSCALL=y
 # CONFIG_STRICT_SIGALTSTACK_SIZE is not set
@@ -573,6 +561,7 @@ CONFIG_MITIGATION_SRBDS=y
 CONFIG_MITIGATION_SSB=y
 CONFIG_MITIGATION_ITS=y
 CONFIG_MITIGATION_TSA=y
+CONFIG_MITIGATION_VMSCAPE=y
 CONFIG_ARCH_HAS_ADD_PAGES=y
 #
@@ -737,7 +726,6 @@ CONFIG_INTEL_IDLE=y
 #
 CONFIG_PCI_DIRECT=y
 CONFIG_PCI_MMCONFIG=y
-CONFIG_PCI_XEN=y
 CONFIG_MMCONF_FAM10H=y
 # CONFIG_PCI_CNB20LE_QUIRK is not set
 # CONFIG_ISA_BUS is not set
@@ -1959,7 +1947,6 @@ CONFIG_FIB_RULES=y
 CONFIG_NET_9P=m
 CONFIG_NET_9P_FD=m
 CONFIG_NET_9P_VIRTIO=m
-CONFIG_NET_9P_XEN=m
 # CONFIG_NET_9P_RDMA is not set
 # CONFIG_NET_9P_DEBUG is not set
 # CONFIG_CAIF is not set
@@ -2009,7 +1996,6 @@ CONFIG_PCI_QUIRKS=y
 CONFIG_PCI_REALLOC_ENABLE_AUTO=y
 CONFIG_PCI_STUB=m
 # CONFIG_PCI_PF_STUB is not set
-CONFIG_XEN_PCIDEV_FRONTEND=m
 CONFIG_PCI_ATS=y
 CONFIG_PCI_LOCKLESS_CONFIG=y
 CONFIG_PCI_IOV=y
@@ -2127,7 +2113,6 @@ CONFIG_ALLOW_DEV_COREDUMP=y
 # CONFIG_DEBUG_DEVRES is not set
 # CONFIG_DEBUG_TEST_DRIVER_REMOVE is not set
 # CONFIG_TEST_ASYNC_DRIVER_PROBE is not set
-CONFIG_SYS_HYPERVISOR=y
 CONFIG_GENERIC_CPU_DEVICES=y
 CONFIG_GENERIC_CPU_AUTOPROBE=y
 CONFIG_GENERIC_CPU_VULNERABILITIES=y
@@ -2397,8 +2382,6 @@ CONFIG_CDROM_PKTCDVD=m
 CONFIG_CDROM_PKTCDVD_BUFFERS=8
 # CONFIG_CDROM_PKTCDVD_WCACHE is not set
 CONFIG_ATA_OVER_ETH=m
-CONFIG_XEN_BLKDEV_FRONTEND=m
-CONFIG_XEN_BLKDEV_BACKEND=m
 CONFIG_VIRTIO_BLK=m
 CONFIG_BLK_DEV_RBD=m
 # CONFIG_BLK_DEV_UBLK is not set
@@ -2602,7 +2585,6 @@ CONFIG_SCSI_FLASHPOINT=y
 # CONFIG_SCSI_MYRB is not set
 # CONFIG_SCSI_MYRS is not set
CONFIG_VMWARE_PVSCSI=m CONFIG_VMWARE_PVSCSI=m
CONFIG_XEN_SCSI_FRONTEND=m
CONFIG_HYPERV_STORAGE=m CONFIG_HYPERV_STORAGE=m
CONFIG_LIBFC=m CONFIG_LIBFC=m
CONFIG_LIBFCOE=m CONFIG_LIBFCOE=m
@@ -3408,8 +3390,6 @@ CONFIG_IEEE802154_ATUSB=m
# CONFIG_WWAN is not set # CONFIG_WWAN is not set
# end of Wireless WAN # end of Wireless WAN
CONFIG_XEN_NETDEV_FRONTEND=m
CONFIG_XEN_NETDEV_BACKEND=m
CONFIG_VMXNET3=m CONFIG_VMXNET3=m
CONFIG_FUJITSU_ES=m CONFIG_FUJITSU_ES=m
CONFIG_HYPERV_NET=m CONFIG_HYPERV_NET=m
@@ -3583,9 +3563,6 @@ CONFIG_N_GSM=m
CONFIG_NOZOMI=m CONFIG_NOZOMI=m
# CONFIG_NULL_TTY is not set # CONFIG_NULL_TTY is not set
CONFIG_HVC_DRIVER=y CONFIG_HVC_DRIVER=y
CONFIG_HVC_IRQ=y
CONFIG_HVC_XEN=y
CONFIG_HVC_XEN_FRONTEND=y
CONFIG_SERIAL_DEV_BUS=y CONFIG_SERIAL_DEV_BUS=y
CONFIG_SERIAL_DEV_CTRL_TTYPORT=y CONFIG_SERIAL_DEV_CTRL_TTYPORT=y
# CONFIG_TTY_PRINTK is not set # CONFIG_TTY_PRINTK is not set
@@ -3636,7 +3613,6 @@ CONFIG_TCG_TIS_I2C_NUVOTON=m
CONFIG_TCG_NSC=m CONFIG_TCG_NSC=m
CONFIG_TCG_ATMEL=m CONFIG_TCG_ATMEL=m
CONFIG_TCG_INFINEON=m CONFIG_TCG_INFINEON=m
CONFIG_TCG_XEN=m
CONFIG_TCG_CRB=m CONFIG_TCG_CRB=m
# CONFIG_TCG_VTPM_PROXY is not set # CONFIG_TCG_VTPM_PROXY is not set
# CONFIG_TCG_TIS_ST33ZP24_I2C is not set # CONFIG_TCG_TIS_ST33ZP24_I2C is not set
@@ -4412,7 +4388,6 @@ CONFIG_INTEL_MEI_WDT=m
CONFIG_NI903X_WDT=m CONFIG_NI903X_WDT=m
CONFIG_NIC7018_WDT=m CONFIG_NIC7018_WDT=m
CONFIG_MEN_A21_WDT=m CONFIG_MEN_A21_WDT=m
CONFIG_XEN_WDT=m
# #
# PCI-based Watchdog Cards # PCI-based Watchdog Cards
@@ -4868,7 +4843,6 @@ CONFIG_DRM_CIRRUS_QEMU=m
# CONFIG_TINYDRM_REPAPER is not set # CONFIG_TINYDRM_REPAPER is not set
# CONFIG_TINYDRM_ST7586 is not set # CONFIG_TINYDRM_ST7586 is not set
# CONFIG_TINYDRM_ST7735R is not set # CONFIG_TINYDRM_ST7735R is not set
# CONFIG_DRM_XEN_FRONTEND is not set
CONFIG_DRM_VBOXVIDEO=m CONFIG_DRM_VBOXVIDEO=m
# CONFIG_DRM_GUD is not set # CONFIG_DRM_GUD is not set
# CONFIG_DRM_SSD130X is not set # CONFIG_DRM_SSD130X is not set
@@ -4920,7 +4894,6 @@ CONFIG_FB_EFI=y
# CONFIG_FB_UDL is not set # CONFIG_FB_UDL is not set
# CONFIG_FB_IBM_GXT4500 is not set # CONFIG_FB_IBM_GXT4500 is not set
# CONFIG_FB_VIRTUAL is not set # CONFIG_FB_VIRTUAL is not set
CONFIG_XEN_FBDEV_FRONTEND=m
# CONFIG_FB_METRONOME is not set # CONFIG_FB_METRONOME is not set
# CONFIG_FB_MB862XX is not set # CONFIG_FB_MB862XX is not set
CONFIG_FB_HYPERV=m CONFIG_FB_HYPERV=m
@@ -5132,7 +5105,6 @@ CONFIG_SND_PCMCIA=y
# CONFIG_SND_SOC is not set # CONFIG_SND_SOC is not set
CONFIG_SND_X86=y CONFIG_SND_X86=y
# CONFIG_HDMI_LPE_AUDIO is not set # CONFIG_HDMI_LPE_AUDIO is not set
# CONFIG_SND_XEN_FRONTEND is not set
CONFIG_SND_VIRTIO=m CONFIG_SND_VIRTIO=m
CONFIG_HID_SUPPORT=y CONFIG_HID_SUPPORT=y
CONFIG_HID=m CONFIG_HID=m
@@ -5349,7 +5321,6 @@ CONFIG_USB_R8A66597_HCD=m
# CONFIG_USB_HCD_BCMA is not set # CONFIG_USB_HCD_BCMA is not set
# CONFIG_USB_HCD_SSB is not set # CONFIG_USB_HCD_SSB is not set
# CONFIG_USB_HCD_TEST_MODE is not set # CONFIG_USB_HCD_TEST_MODE is not set
# CONFIG_USB_XEN_HCD is not set
# #
# USB Device Class drivers # USB Device Class drivers
@@ -6024,41 +5995,6 @@ CONFIG_HYPERV_UTILS=m
CONFIG_HYPERV_BALLOON=m CONFIG_HYPERV_BALLOON=m
# end of Microsoft Hyper-V guest support # end of Microsoft Hyper-V guest support
#
# Xen driver support
#
CONFIG_XEN_BALLOON=y
CONFIG_XEN_BALLOON_MEMORY_HOTPLUG=y
CONFIG_XEN_MEMORY_HOTPLUG_LIMIT=512
CONFIG_XEN_SCRUB_PAGES_DEFAULT=y
CONFIG_XEN_DEV_EVTCHN=m
CONFIG_XEN_BACKEND=y
CONFIG_XENFS=m
CONFIG_XEN_COMPAT_XENFS=y
CONFIG_XEN_SYS_HYPERVISOR=y
CONFIG_XEN_XENBUS_FRONTEND=y
CONFIG_XEN_GNTDEV=m
CONFIG_XEN_GRANT_DEV_ALLOC=m
# CONFIG_XEN_GRANT_DMA_ALLOC is not set
CONFIG_SWIOTLB_XEN=y
CONFIG_XEN_PCI_STUB=y
CONFIG_XEN_PCIDEV_BACKEND=m
# CONFIG_XEN_PVCALLS_FRONTEND is not set
# CONFIG_XEN_PVCALLS_BACKEND is not set
CONFIG_XEN_SCSI_BACKEND=m
CONFIG_XEN_PRIVCMD=m
CONFIG_XEN_ACPI_PROCESSOR=m
CONFIG_XEN_MCE_LOG=y
CONFIG_XEN_HAVE_PVMMU=y
CONFIG_XEN_EFI=y
CONFIG_XEN_AUTO_XLATE=y
CONFIG_XEN_ACPI=y
CONFIG_XEN_SYMS=y
CONFIG_XEN_HAVE_VPMU=y
CONFIG_XEN_UNPOPULATED_ALLOC=y
# CONFIG_XEN_VIRTIO is not set
# end of Xen driver support
# CONFIG_GREYBUS is not set # CONFIG_GREYBUS is not set
# CONFIG_COMEDI is not set # CONFIG_COMEDI is not set
CONFIG_STAGING=y CONFIG_STAGING=y

View File

@@ -70,5 +70,8 @@ stage1:evdev
stage1:serio_raw
stage1:serio
+# zos core networking is with openvswitch vxlan over mycelium
+openvswitch
# Keep stage2 empty; we only use stage1 in this build
# stage2: (intentionally unused)

View File

@@ -17,6 +17,8 @@ eudev-netifnames
kmod
fuse3
pciutils
+efitools
+efibootmgr
# Console/terminal management
util-linux
@@ -47,7 +49,3 @@ haveged
openssh-server
zellij
-# Essential debugging and monitoring tools included
-# NO development tools, NO curl/wget, NO python, NO redis
-# NO massive linux-firmware package
-# Other tools will be loaded with RFS after network connectivity

View File

@@ -8,12 +8,12 @@
S3_ENDPOINT="https://hub.grid.tf" S3_ENDPOINT="https://hub.grid.tf"
# AWS region string expected by the S3-compatible API # AWS region string expected by the S3-compatible API
S3_REGION="us-east-1" S3_REGION="garage"
# Bucket and key prefix used for RFS store (content-addressed blobs) # Bucket and key prefix used for RFS store (content-addressed blobs)
# The RFS store path will be: s3://.../<S3_BUCKET>/<S3_PREFIX> # The RFS store path will be: s3://.../<S3_BUCKET>/<S3_PREFIX>
S3_BUCKET="zos" S3_BUCKET="zos"
S3_PREFIX="zosbuilder/store" S3_PREFIX="zos/store"
# Access credentials (required by rfs pack to push blobs) # Access credentials (required by rfs pack to push blobs)
S3_ACCESS_KEY="REPLACE_ME" S3_ACCESS_KEY="REPLACE_ME"
@@ -36,10 +36,10 @@ MANIFESTS_SUBPATH="manifests"
# Behavior flags (can be overridden by CLI flags or env) # Behavior flags (can be overridden by CLI flags or env)
# Whether to keep s3:// store as a fallback entry in the .fl after adding WEB_ENDPOINT # Whether to keep s3:// store as a fallback entry in the .fl after adding WEB_ENDPOINT
KEEP_S3_FALLBACK="false" KEEP_S3_FALLBACK="true"
# Whether to attempt uploading .fl manifests to S3 (requires MinIO Client: mc) # Whether to attempt uploading .fl manifests to S3 (requires MinIO Client: mc)
UPLOAD_MANIFESTS="false" UPLOAD_MANIFESTS="true"
# Read-only credentials for route URL in manifest (optional; defaults to write keys above) # Read-only credentials for route URL in manifest (optional; defaults to write keys above)
# These will be embedded into the flist 'route.url' so runtime mounts can read directly from Garage. # These will be embedded into the flist 'route.url' so runtime mounts can read directly from Garage.
@@ -53,5 +53,27 @@ READ_SECRET_KEY="REPLACE_ME_READ"
# - ROUTE_PATH: path to the blob route (default: /blobs) # - ROUTE_PATH: path to the blob route (default: /blobs)
# - ROUTE_REGION: region string for Garage (default: garage) # - ROUTE_REGION: region string for Garage (default: garage)
ROUTE_ENDPOINT="https://hub.grid.tf" ROUTE_ENDPOINT="https://hub.grid.tf"
ROUTE_PATH="/blobs" ROUTE_PATH="/zos/store"
ROUTE_REGION="garage" ROUTE_REGION="garage"
# RESP/DB-style blob store (design-time placeholders; optional)
# Enable to allow pack scripts or future rfs CLI to upload blobs to a RESP-compatible store.
# This does not change the existing S3 flow; RESP acts as an additional backend.
#
# Example URI semantics (see docs/rfs-flists.md additions):
# resp://host:port/db?prefix=blobs
# resp+tls://host:port/db?prefix=blobs&amp;ca=/etc/ssl/certs/ca.pem
# resp+sentinel://sentinelHost:26379/mymaster?prefix=blobs
#
# Minimal keys for a direct RESP endpoint
RESP_ENABLED="false"
RESP_ENDPOINT="localhost:6379" # host:port
RESP_DB="0" # integer DB index
RESP_PREFIX="zos/blobs" # namespace/prefix for content-addressed keys
RESP_USERNAME="" # optional
RESP_PASSWORD="" # optional
RESP_TLS="false" # true/false
RESP_CA="" # path to CA bundle when RESP_TLS=true
# Optional: Sentinel topology (overrides RESP_ENDPOINT for discovery)
RESP_SENTINEL="" # sentinelHost:port (comma-separated for multiple)
RESP_MASTER="" # Sentinel master name (e.g., "mymaster")
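Since the RESP keys above are design-time placeholders, it may help to see how a pack-time shim could consume them. The sketch below is a hypothetical helper (`resp_blob_key` is not part of the repo) that derives the content-addressed key layout `RESP_PREFIX/ab/cd/<sha256>` described in docs/rfs-flists.md; the actual upload is left as a comment because it needs a reachable RESP endpoint:

```shell
#!/bin/sh
# Hypothetical sketch: derive the content-addressed RESP key for a blob file,
# following the prefix/ab/cd/<hash> layout. Not part of the repo.
resp_blob_key() {
    file="$1"
    prefix="${RESP_PREFIX:-zos/blobs}"
    hash=$(sha256sum "$file" | awk '{print $1}')
    printf '%s/%s/%s/%s\n' "$prefix" \
        "$(printf '%s' "$hash" | cut -c1-2)" \
        "$(printf '%s' "$hash" | cut -c3-4)" \
        "$hash"
}

# An upload would then be a single SET against RESP_ENDPOINT, e.g.:
#   redis-cli -h "${RESP_ENDPOINT%%:*}" -p "${RESP_ENDPOINT##*:}" -n "$RESP_DB" \
#     SET "$(resp_blob_key "$blob")" "$(base64 "$blob")"
```

The two-level fan-out (`ab/cd/`) simply keeps any single key namespace from growing unbounded, mirroring common content-addressed store layouts.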

View File

@@ -4,7 +4,9 @@
# Git repositories to clone and build
git zinit https://github.com/threefoldtech/zinit master build_zinit
git mycelium https://github.com/threefoldtech/mycelium v0.6.1 build_mycelium
+git zosstorage https://git.ourworld.tf/delandtj/zosstorage main build_zosstorage
+git youki git@github.com:youki-dev/youki.git v0.5.7 build_youki
+git rfs https://github.com/threefoldtech/rfs development build_rfs
# Pre-built releases to download
-release rfs https://github.com/threefoldtech/rfs/releases/download/v2.0.6/rfs v2.0.6 install_rfs
+# release rfs https://github.com/threefoldtech/rfs/releases/download/v2.0.6/rfs v2.0.6 install_rfs
release corex https://github.com/threefoldtech/corex/releases/download/2.1.4/corex-2.1.4-amd64-linux-static 2.1.4 install_corex rename=corex

View File

@@ -5,4 +5,4 @@ if ! getent group dhcpcd >/dev/null 2>&1; then addgroup -S dhcpcd 2>/dev/null ||
if ! getent passwd dhcpcd >/dev/null 2>&1; then adduser -S -D -s /sbin/nologin -G dhcpcd dhcpcd 2>/dev/null || true; fi
# Exec dhcpcd (will run as root if it cannot drop to dhcpcd user)
interfaces=$(ip -br l | awk '!/lo/&&!/my0/{print $1}')
-exec dhcpcd -B $interfaces
+exec dhcpcd -p -B $interfaces

View File

@@ -0,0 +1,32 @@
#!/bin/bash
# Simple script to create OVS database and start ovsdb-server
# Configuration
DATABASE=${DATABASE:-"/etc/openvswitch/conf.db"}
DBSCHEMA="/usr/share/openvswitch/vswitch.ovsschema"
DB_SOCKET=${DB_SOCKET:-"/var/run/openvswitch/db.sock"}
RUNDIR="/var/run/openvswitch"
# Create run directory
mkdir -p "$RUNDIR"
# Create database if it doesn't exist
if [ ! -e "$DATABASE" ]; then
echo "Creating database: $DATABASE"
ovsdb-tool create "$DATABASE" "$DBSCHEMA"
fi
# Check if database needs conversion
if [ "$(ovsdb-tool needs-conversion "$DATABASE" "$DBSCHEMA")" = "yes" ]; then
echo "Converting database: $DATABASE"
ovsdb-tool convert "$DATABASE" "$DBSCHEMA"
fi
# Start ovsdb-server
echo "Starting ovsdb-server..."
exec ovsdb-server \
--remote=punix:"$DB_SOCKET" \
--remote=db:Open_vSwitch,Open_vSwitch,manager_options \
--pidfile \
--detach \
"$DATABASE"
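Because `ovsdb-server` is started with `--detach` and the corresponding zinit unit is a oneshot, a dependent service can still race the control socket. A small readiness gate (hypothetical, not in the repo) could poll for it:

```shell
#!/bin/sh
# Hypothetical readiness gate: poll until the path exists as a unix socket,
# giving up after max_tries * 0.1s. Returns 1 on timeout.
wait_for_sock() {
    sock="$1"
    max_tries="${2:-50}"
    tries=0
    while [ ! -S "$sock" ]; do
        tries=$((tries + 1))
        [ "$tries" -ge "$max_tries" ] && return 1
        sleep 0.1
    done
    return 0
}
```

A dependent could then run, for example, `wait_for_sock /var/run/openvswitch/db.sock && ovs-vsctl show` before configuring bridges.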

View File

@@ -1,6 +1,8 @@
-exec: /usr/bin/mycelium --key-file /tmp/mycelium_priv_key.bin
+exec: /usr/bin/mycelium --key-file /var/cache/etc/mycelium_priv_key.bin
  --tun-name my0 --silent --peers tcp://188.40.132.242:9651 tcp://136.243.47.186:9651
  tcp://185.69.166.7:9651 tcp://185.69.166.8:9651 tcp://65.21.231.58:9651 tcp://65.109.18.113:9651
  tcp://209.159.146.190:9651 tcp://5.78.122.16:9651 tcp://5.223.43.251:9651 tcp://142.93.217.194:9651
after:
- network
+- udev-rfs
+- zosstorage

View File

@@ -1,5 +1,6 @@
-exec: sh /etc/zinit/init/network.sh eth0
+exec: sh /etc/zinit/init/network.sh
after:
- depmod
- udevd
- udev-trigger
+test: ping -c1 www.google.com

View File

@@ -0,0 +1,4 @@
exec: /etc/zinit/init/ovsdb-server.sh
oneshot: true
after:
- zosstorage

View File

@@ -0,0 +1,4 @@
exec: /usr/bin/zosstorage -l debug
oneshot: true
after:
- rfs-modules

View File

@@ -84,7 +84,7 @@ Initramfs Assembly Key Functions
- Install init script: [bash.initramfs_install_init_script()](scripts/lib/initramfs.sh:74)
  - Installs [config/init](config/init) as /init in initramfs root.
- Components copy: [bash.initramfs_copy_components()](scripts/lib/initramfs.sh:101)
-  - Installs built components (zinit/rfs/mycelium/corex) into proper locations, strips/UPX where applicable.
+  - Installs built components (zinit/rfs/mycelium/corex/zosstorage) into proper locations, strips/UPX where applicable.
- Modules setup: [bash.initramfs_setup_modules()](scripts/lib/initramfs.sh:229)
  - Reads [config/modules.conf](config/modules.conf), resolves deps via [bash.initramfs_resolve_module_dependencies()](scripts/lib/initramfs.sh:318), generates stage1 list (firmware hints in modules.conf are ignored; firmware.conf is authoritative).
- Create module scripts: [bash.initramfs_create_module_scripts()](scripts/lib/initramfs.sh:427)
@@ -176,9 +176,12 @@ Stage System and Incremental Rebuilds
- Shows stage status before/after marker removal; no --rebuild-from is passed by default (relies on markers only).
- Manual minimal rebuild:
  - Remove relevant .done files, e.g.: initramfs_create.done initramfs_test.done validation.done
-  - Rerun: DEBUG=1 ./scripts/build.sh --skip-tests
+  - Rerun: DEBUG=1 ./scripts/build.sh
- Show status:
  - ./scripts/build.sh --show-stages
+- Test built kernel:
+  - ./runit.sh --hypervisor qemu
+  - ./runit.sh --hypervisor ch --disks 5 --reset
Key Decisions (current)
- Firmware selection for initramfs comes exclusively from [config/firmware.conf](config/firmware.conf); firmware hints in modules.conf are ignored to avoid duplication/mismatch.
@@ -204,3 +207,80 @@ Change Log
- Normalize INSTALL_DIR/COMPONENTS_DIR/KERNEL_DIR/DIST_DIR to absolute paths post-config load.
- Add validation diagnostics prints (input/PWD/PROJECT_ROOT/INSTALL_DIR/resolved).
- Ensure shadow package in container for passwd/chpasswd; keep openssl and openssl-dev; remove perl earlier.
- 2025-10-10:
- Record zosstorage as a standard component installed during [bash.initramfs_copy_components()](scripts/lib/initramfs.sh:101) and sync component documentation references.
Updates 2025-10-01
- Function index regenerated: see [scripts/functionlist.md](scripts/functionlist.md) for an authoritative map of all functions with current line numbers. Use it alongside the quick links below to jump into code fast.
- Key jump-points (current lines):
- Finalization: [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:568)
- CPIO creation: [bash.initramfs_create_cpio()](scripts/lib/initramfs.sh:691)
- Validation: [bash.initramfs_validate()](scripts/lib/initramfs.sh:820)
- Kernel embed config: [bash.kernel_modify_config_for_initramfs()](scripts/lib/kernel.sh:130)
- Stage orchestrator entry: [bash.main_build_process()](scripts/build.sh:214)
- Repo-wide index: [scripts/functionlist.md](scripts/functionlist.md)
Roadmap / TODO (tracked in tool todo list)
- Zosception (zinit service graph and ordering)
- Define additional services and ordering for nested/recursive orchestration.
- Likely integration points:
- Networking readiness before RFS: [config/zinit/network.yaml](config/zinit/network.yaml)
- Early udev coldplug: [config/zinit/udev-trigger.yaml](config/zinit/udev-trigger.yaml)
- Post-RFS coldplug: [config/zinit/udev-rfs.yaml](config/zinit/udev-rfs.yaml)
- Ensure dependency edges are correct in the service DAG image (see docs/img_*.png).
- Add zosstorage to initramfs
- Source:
- If packaged: add to [config/packages.list](config/packages.list).
- If built from source: extend [bash.components_parse_sources_conf()](scripts/lib/components.sh:13) and add a build_* function; install via [bash.initramfs_copy_components()](scripts/lib/initramfs.sh:102).
- Zinit unit:
- Add YAML under [config/zinit/](config/zinit/) and hook into the network-ready path.
- Ordering:
- Start after "network" and before/with RFS mounts if it provides storage functionality used by rfs.
- RFS blob store backends (design + docs; http and s3 exist)
- Current S3 store URI construction: [bash.rfs_common_build_s3_store_uri()](scripts/rfs/common.sh:137)
- Flist manifest store patching: [bash.rfs_common_patch_flist_stores()](scripts/rfs/common.sh:385)
- Route URL patching: [bash.rfs_common_patch_flist_route_url()](scripts/rfs/common.sh:494)
- Packers entrypoints:
- [scripts/rfs/pack-modules.sh](scripts/rfs/pack-modules.sh:1)
- [scripts/rfs/pack-firmware.sh](scripts/rfs/pack-firmware.sh:1)
- Proposed additional backend: RESP/DB-style store
- Goal: Allow rfs to push/fetch content-addressed blobs via a RESP-compatible endpoint (e.g., Redis/KeyDB/Dragonfly-like), or a thin HTTP/RESP adapter.
- Draft URI scheme examples:
- resp://host:port/db?tls=0&prefix=blobs
- resp+tls://host:port/db?prefix=blobs&ca=/etc/ssl/certs/ca.pem
- resp+sentinel://sentinelHost:26379/mymaster?prefix=blobs
- Minimum operations:
- PUT blob: SETEX prefix/ab/cd/hash ttl file-bytes or HSET prefix/hash data file-bytes
- GET blob: GET or HGET
- HEAD/exists: EXISTS
- Optional: pipelined/mget for batch prefetch
- Client integration layers:
- Pack-time: extend rfs CLI store resolver (design doc first; scripts/rfs/common.sh can map scheme→uploader if CLI not ready).
- Manifest post-process: still supported; stores table may include multiple URIs (s3 + resp) for redundancy.
- Caching and retries:
- Local on-disk cache under dist/.rfs-cache keyed by hash with LRU GC.
- Exponential backoff on GET failures; fall back across stores in order.
- Auth:
- RESP: optional username/password in URI; TLS with cert pinning parameters.
- Keep secrets in config/rfs.conf or env; do not embed write creds in manifests (read-credential routes only).
- Deliverables:
- Design section in docs/rfs-flists.md (to be added)
- Config keys in config/rfs.conf.example for RESP endpoints
- Optional shim uploader script if CLI support lags.
- Documentation refresh tasks
- Cross-check this file's clickable references against [scripts/functionlist.md](scripts/functionlist.md) after changes in lib files.
- Keep “Branding behavior” and “Absolute Path Normalization” pointers aligned with:
- [bash.common.sh normalization](scripts/lib/common.sh:244)
- [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:568)
Diagnostics-first reminder
- Use DEBUG=1 and stage markers for minimal rebuilds.
- Quick commands:
- Show stages: ./scripts/build.sh --show-stages
- Minimal rebuild after zinit/init edits: [scripts/rebuild-after-zinit.sh](scripts/rebuild-after-zinit.sh)
- Validate archive: [bash.initramfs_create_cpio()](scripts/lib/initramfs.sh:691), then [bash.initramfs_test_archive()](scripts/lib/initramfs.sh:953)

View File

@@ -107,7 +107,7 @@ Diagnostics-first workflow (strict)
- For any failure, first collect specific logs:
  - Enable DEBUG=1 for verbose logs.
  - Re-run only the impacted stage if possible:
-    - Example: rm -f .build-stages/validation.done && DEBUG=1 ./scripts/build.sh --skip-tests
+    - Example: rm -f .build-stages/validation.done && DEBUG=1 ./scripts/build.sh
- Use existing diagnostics:
  - Branding debug lines: [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:575)
  - Validation debug lines (input, PWD, PROJECT_ROOT, INSTALL_DIR, resolved): [bash.initramfs_validate()](scripts/lib/initramfs.sh:799)
@@ -119,12 +119,15 @@ Common tasks and commands
- DEBUG=1 ./scripts/build.sh
- Minimal rebuild of last steps:
  - rm -f .build-stages/initramfs_create.done .build-stages/initramfs_test.done .build-stages/validation.done
-  - DEBUG=1 ./scripts/build.sh --skip-tests
+  - DEBUG=1 ./scripts/build.sh
- Validation only:
  - rm -f .build-stages/validation.done
-  - DEBUG=1 ./scripts/build.sh --skip-tests
+  - DEBUG=1 ./scripts/build.sh
- Show stage status:
  - ./scripts/build.sh --show-stages
+- Test built kernel:
+  - ./runit.sh --hypervisor qemu
+  - ./runit.sh --hypervisor ch --disks 5
Checklists and helpers
@@ -154,7 +157,7 @@ C) Minimal rebuild after zinit/init/modules.conf changes
- Stage status is printed before/after marker removal; the helper avoids --rebuild-from by default to prevent running early stages.
- Manual fallback:
  - rm -f .build-stages/initramfs_create.done .build-stages/initramfs_test.done .build-stages/validation.done
-  - DEBUG=1 ./scripts/build.sh --skip-tests
+  - DEBUG=1 ./scripts/build.sh
D) INITRAMFS_ARCHIVE unbound during kernel build stage
- stage_kernel_build now defaults INITRAMFS_ARCHIVE if unset:

docs/TODO.md Normal file
View File

@@ -0,0 +1,73 @@
# Zero-OS Builder Persistent TODO
This canonical checklist is the single source of truth for ongoing work. It mirrors the live task tracker but is versioned in-repo for review and PRs. Jump-points reference exact functions and files for quick triage.
## High-level
- [x] Regenerate repository function index: [scripts/functionlist.md](../scripts/functionlist.md)
- [x] Refresh NOTES with jump-points and roadmap: [docs/NOTES.md](NOTES.md)
- [x] Extend RFS design with RESP/DB-style backend: [docs/rfs-flists.md](rfs-flists.md)
- [x] Make Rust components Git fetch non-destructive: [bash.components_download_git()](../scripts/lib/components.sh:72)
- [ ] Update zinit config for "zosception" workflow: [config/zinit/](../config/zinit/)
- [ ] Add zosstorage to the initramfs (package/build/install + zinit unit)
- [ ] Validate via minimal rebuild and boot tests; refine depmod/udev docs
- [ ] Commit and push documentation and configuration updates (post-zosception/zosstorage)
## Zosception (zinit service graph and ordering)
- [ ] Define service graph changes and ordering constraints
- Reference current triggers:
- Early coldplug: [config/zinit/udev-trigger.yaml](../config/zinit/udev-trigger.yaml)
- Network readiness: [config/zinit/network.yaml](../config/zinit/network.yaml)
- Post-mount coldplug: [config/zinit/udev-rfs.yaml](../config/zinit/udev-rfs.yaml)
- [ ] Add/update units under [config/zinit/](../config/zinit/) with proper after/needs/wants
- [ ] Validate with stage runner and logs
- Stage runner: [bash.stage_run()](../scripts/lib/stages.sh:99)
- Main flow: [bash.main_build_process()](../scripts/build.sh:214)
## ZOS Storage in initramfs
- [ ] Decide delivery mechanism:
- [ ] APK via [config/packages.list](../config/packages.list)
- [ ] Source build via [bash.components_parse_sources_conf()](../scripts/lib/components.sh:13) with a new build function
- [ ] Copy/install into initramfs
- Components copy: [bash.initramfs_copy_components()](../scripts/lib/initramfs.sh:102)
- Zinit setup: [bash.initramfs_setup_zinit()](../scripts/lib/initramfs.sh:13)
- [ ] Create zinit unit(s) for zosstorage startup and ordering
- Place after network and before RFS if it provides storage used by rfs
- [ ] Add smoke command to confirm presence in image (e.g., which/--version) during [bash.initramfs_validate()](../scripts/lib/initramfs.sh:820)
## RFS backends — implementation follow-up (beyond design)
- [ ] RESP uploader shim for packers (non-breaking)
- Packers entrypoints: [scripts/rfs/pack-modules.sh](../scripts/rfs/pack-modules.sh), [scripts/rfs/pack-firmware.sh](../scripts/rfs/pack-firmware.sh)
- Config loader: [bash.rfs_common_load_rfs_s3_config()](../scripts/rfs/common.sh:82) → extend to parse RESP_* (non-breaking)
- Store URI builder (S3 exists): [bash.rfs_common_build_s3_store_uri()](../scripts/rfs/common.sh:137)
- Manifest patching remains:
- Stores table: [bash.rfs_common_patch_flist_stores()](../scripts/rfs/common.sh:385)
- route.url: [bash.rfs_common_patch_flist_route_url()](../scripts/rfs/common.sh:494)
- [ ] Connectivity checks and retries for RESP path
- [ ] Local cache for pack-time (optional)
## Validation and boot tests
- [ ] Minimal rebuild after zinit unit edits
- Helper: [scripts/rebuild-after-zinit.sh](../scripts/rebuild-after-zinit.sh)
- [ ] Validate contents before CPIO
- Create: [bash.initramfs_create_cpio()](../scripts/lib/initramfs.sh:691)
- Validate: [bash.initramfs_validate()](../scripts/lib/initramfs.sh:820)
- [ ] QEMU / cloud-hypervisor smoke tests
- Test runner: [runit.sh](../runit.sh)
- [ ] Kernel embed path and versioning sanity
- Embed config: [bash.kernel_modify_config_for_initramfs()](../scripts/lib/kernel.sh:130)
- Full version logic: [bash.kernel_get_full_version()](../scripts/lib/kernel.sh:14)
## Operational conventions (keep)
- [ ] Diagnostics-first changes; add logs before fixes
- [ ] Absolute path normalization respected
- Normalization: [scripts/lib/common.sh](../scripts/lib/common.sh:244)
Notes
- Keep this file in sync with the live tracker. Reference it in PR descriptions.
- Use the clickable references above for rapid navigation.

View File

@@ -13,7 +13,6 @@ Key sourced libraries:
- [initramfs.sh](scripts/lib/initramfs.sh)
- [stages.sh](scripts/lib/stages.sh)
- [docker.sh](scripts/lib/docker.sh)
-- [testing.sh](scripts/lib/testing.sh)
Main stages executed (incremental via [stage_run()](scripts/lib/stages.sh:99)):
1) alpine_extract, alpine_configure, alpine_packages

View File

@@ -165,3 +165,45 @@ Use the helper to inspect a manifest, optionally listing entries and testing a l
- scripts/rfs/verify-flist.sh -m dist/flists/firmware-latest.fl --tree
- Inspect + mount test to a temp dir:
  - sudo scripts/rfs/verify-flist.sh -m dist/flists/modules-6.12.44-Zero-OS.fl --mount
## Additional blob store backends (design)
This extends the existing S3/HTTP approach with a RESP/DB-style backend option for rfs blob storage. It is a design-only addition; CLI and scripts will be extended in a follow-up.
Scope
- Keep S3 flow intact via [scripts/rfs/common.sh](scripts/rfs/common.sh:137), [scripts/rfs/common.sh](scripts/rfs/common.sh:385), and [scripts/rfs/common.sh](scripts/rfs/common.sh:494).
- Introduce RESP URIs that can be encoded in config and, later, resolved by rfs or a thin uploader shim invoked by:
- [scripts/rfs/pack-modules.sh](scripts/rfs/pack-modules.sh:1)
- [scripts/rfs/pack-firmware.sh](scripts/rfs/pack-firmware.sh:1)
URI schemes (draft)
- resp://host:port/db?prefix=blobs
- resp+tls://host:port/db?prefix=blobs&ca=/etc/ssl/certs/ca.pem
- resp+sentinel://sentinelHost:26379/mymaster?prefix=blobs
- Credentials may be provided via URI userinfo or config (recommended: config only).
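As a sketch of what a shim or future CLI resolver might do with the plain `resp://` form, the following minimal parser splits out host, port, DB index, and prefix (handling only a lone `prefix=` query parameter; TLS and sentinel variants are omitted):

```shell
#!/bin/sh
# Sketch: split resp://host:port/db?prefix=... into "host port db prefix".
# Only the plain scheme and a single prefix= query parameter are handled.
parse_resp_uri() {
    rest="${1#resp://}"
    hostport="${rest%%/*}"
    path="${rest#*/}"
    db="${path%%\?*}"
    prefix="blobs"
    case "$path" in
        *\?prefix=*) prefix="${path#*\?prefix=}" ;;
    esac
    printf '%s %s %s %s\n' "${hostport%%:*}" "${hostport##*:}" "$db" "$prefix"
}
```

For example, `parse_resp_uri "resp://localhost:6379/0?prefix=zos/blobs"` yields `localhost 6379 0 zos/blobs`, matching the config defaults proposed in config/rfs.conf.example.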
Operations (minimal set)
- PUT blob: write content-addressed key (e.g., prefix/ab/cd/hash)
- GET blob: fetch by exact key
- Exists/HEAD: presence test by key
- Optional batching: pipelined MGET for prefetch
Config keys (see example additions in config/rfs.conf.example)
- RESP_ENDPOINT (host:port), RESP_DB (integer), RESP_PREFIX (path namespace)
- RESP_USERNAME/RESP_PASSWORD (optional), RESP_TLS=0/1 (+ RESP_CA if needed)
- RESP_SENTINEL and RESP_MASTER for sentinel deployments
Manifests and routes
- Keep S3 store in flist stores table (fallback) while enabling route.url patching to HTTP/S3 for read-only access:
- Patch stores table as today via [scripts/rfs/common.sh](scripts/rfs/common.sh:385)
- Patch route.url as today via [scripts/rfs/common.sh](scripts/rfs/common.sh:494)
- RESP may be used primarily for pack-time blob uploads or as an additional store the CLI can consume later.
Security
- Do not embed write credentials in manifests.
- Read-only credentials may be embedded in route.url if required, mirroring S3 pattern.
Next steps
- Implement RESP uploader shim called from pack scripts; keep the CLI S3 flow unchanged.
- Extend config loader in [scripts/rfs/common.sh](scripts/rfs/common.sh:82) to parse RESP_* variables.
- Add verification routines to sanity-check connectivity before pack.
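The key layout and PUT/Exists operations above can be sketched with redis-cli. This is a design illustration only: the function names are hypothetical, and it assumes `RESP_ENDPOINT`, `RESP_DB`, and `RESP_PREFIX` are loaded from config as described above.

```shell
#!/usr/bin/env bash
# Sketch of a pack-time RESP uploader shim (illustrative, not part of the repo).
set -euo pipefail

RESP_PREFIX="${RESP_PREFIX:-blobs}"

# Content-addressed key: prefix/ab/cd/<hash> (first two byte-pairs as fanout)
resp_blob_key() {
    local hash="$1"
    printf '%s/%s/%s/%s\n' "$RESP_PREFIX" "${hash:0:2}" "${hash:2:2}" "$hash"
}

# PUT with a presence check first, so re-packs skip blobs already uploaded
resp_put_blob() {
    local file="$1"
    local host="${RESP_ENDPOINT%%:*}" port="${RESP_ENDPOINT##*:}"
    local hash key
    hash=$(sha256sum "$file" | awk '{print $1}')
    key=$(resp_blob_key "$hash")
    if [[ "$(redis-cli -h "$host" -p "$port" -n "${RESP_DB:-0}" EXISTS "$key")" == "1" ]]; then
        echo "skip ${key} (already present)"
    else
        redis-cli -h "$host" -p "$port" -n "${RESP_DB:-0}" -x SET "$key" <"$file" >/dev/null
        echo "put ${key}"
    fi
}
```

The presence check makes uploads idempotent, which matters when pack scripts are re-run against an already-populated store.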

runit.sh Executable file

@@ -0,0 +1,263 @@
#!/bin/bash
set -e
# Default configuration
HYPERVISOR="${HYPERVISOR:-qemu}" # qemu or ch (cloud-hypervisor)
NUM_DISKS="${NUM_DISKS:-3}"
DISK_SIZE="${DISK_SIZE:-10G}"
DISK_PREFIX="zosvol"
KERNEL="dist/vmlinuz.efi"
MEMORY="2048"
BRIDGE="${BRIDGE:-zosbr}"
TAP_INTERFACE="${TAP_INTERFACE:-zos-tap0}"
# Parse command line arguments
RESET_VOLUMES=false
usage() {
cat <<EOF
Usage: $0 [OPTIONS]
Options:
-H, --hypervisor <qemu|ch> Choose hypervisor (default: qemu)
-d, --disks <N> Number of disks to create (default: 3)
-s, --disk-size <SIZE> Size of each disk (default: 10G)
-r, --reset Delete and recreate all volumes
-m, --memory <MB> Memory in MB (default: 2048)
-b, --bridge <NAME> Network bridge name (default: zosbr)
-h, --help Show this help message
Environment variables:
HYPERVISOR Same as --hypervisor
NUM_DISKS Same as --disks
DISK_SIZE Same as --disk-size
BRIDGE Same as --bridge
Examples:
$0 -H qemu -d 5
$0 --hypervisor ch --reset
$0 -d 4 -r
HYPERVISOR=ch NUM_DISKS=4 $0
EOF
exit 0
}
# Parse arguments using getopt
# Note: with set -e, a plain "TEMP=$(getopt ...)" would abort before the $? check;
# run the substitution inside the condition so parse errors are reported.
if ! TEMP=$(getopt -o 'H:d:s:rm:b:h' --long 'hypervisor:,disks:,disk-size:,reset,memory:,bridge:,help' -n "$0" -- "$@"); then
    echo "Error parsing arguments. Try '$0 --help' for more information."
    exit 1
fi
eval set -- "$TEMP"
unset TEMP
while true; do
case "$1" in
'-H' | '--hypervisor')
HYPERVISOR="$2"
shift 2
;;
'-d' | '--disks')
NUM_DISKS="$2"
shift 2
;;
'-s' | '--disk-size')
DISK_SIZE="$2"
shift 2
;;
'-r' | '--reset')
RESET_VOLUMES=true
shift
;;
'-m' | '--memory')
MEMORY="$2"
shift 2
;;
'-b' | '--bridge')
BRIDGE="$2"
shift 2
;;
'-h' | '--help')
usage
;;
'--')
shift
break
;;
*)
echo "Internal error!"
exit 1
;;
esac
done
# Validate hypervisor choice
if [[ "$HYPERVISOR" != "qemu" && "$HYPERVISOR" != "ch" ]]; then
echo "Error: Invalid hypervisor '$HYPERVISOR'. Must be 'qemu' or 'ch'"
exit 1
fi
if [[ "$HYPERVISOR" == "qemu" ]]; then
DISK_SUFFIX="qcow"
DISK_FORMAT="qcow2"
else
DISK_SUFFIX="raw"
DISK_FORMAT="raw"
fi
# Validate kernel exists
if [[ ! -f "$KERNEL" ]]; then
echo "Error: Kernel not found at $KERNEL"
echo "Run the build first: ./scripts/build.sh"
exit 1
fi
# Setup TAP interface for cloud-hypervisor
setup_tap_interface() {
local tap="$1"
local bridge="$2"
echo "Setting up TAP interface $tap for cloud-hypervisor..."
if [[ -z "${CH_TAP_IP:-}" && -z "${CH_TAP_MASK:-}" ]]; then
echo " Reminder: set CH_TAP_IP and CH_TAP_MASK to configure $tap and silence cloud-hypervisor IP warnings."
fi
# Check if bridge exists
if ! ip link show "$bridge" &>/dev/null; then
echo "Warning: Bridge $bridge does not exist. Create it with:"
echo " sudo ip link add name $bridge type bridge"
echo " sudo ip link set $bridge up"
echo ""
read -p "Continue anyway? (y/N) " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
exit 1
fi
fi
# Check if TAP already exists
if ip link show "$tap" &>/dev/null; then
echo " TAP interface $tap already exists"
# Check if it's already attached to bridge
if bridge link show | grep -q "$tap"; then
echo " $tap is already attached to bridge"
return 0
else
echo " Attaching $tap to bridge $bridge"
sudo ip link set dev "$tap" master "$bridge" 2>/dev/null || true
return 0
fi
fi
# Create new TAP interface
echo " Creating TAP interface $tap"
sudo ip tuntap add "$tap" mode tap user "$(whoami)"
# Bring it up
echo " Bringing up $tap"
sudo ip link set "$tap" up
# Attach to bridge
echo " Attaching $tap to bridge $bridge"
sudo ip link set dev "$tap" master "$bridge"
echo " TAP interface setup complete"
}
# Cleanup TAP interface on exit (for cloud-hypervisor)
cleanup_tap_interface() {
local tap="$1"
if [[ -n "$tap" ]] && ip link show "$tap" &>/dev/null; then
echo "Cleaning up TAP interface $tap"
sudo ip link delete "$tap" 2>/dev/null || true
fi
}
# Reset volumes if requested
if [[ "$RESET_VOLUMES" == "true" ]]; then
echo "Resetting volumes..."
for i in $(seq 0 $((NUM_DISKS - 1))); do
for suffix in qcow raw; do
vol="${DISK_PREFIX}${i}.${suffix}"
if [[ -f "$vol" ]]; then
echo " Removing $vol"
rm -f "$vol"
fi
done
done
fi
# Create disk volumes if they don't exist
echo "Ensuring $NUM_DISKS disk volume(s) exist..."
for i in $(seq 0 $((NUM_DISKS - 1))); do
vol="${DISK_PREFIX}${i}.${DISK_SUFFIX}"
if [[ ! -f "$vol" ]]; then
echo " Creating $vol (size: $DISK_SIZE, format: $DISK_FORMAT)"
qemu-img create -f "$DISK_FORMAT" "$vol" "$DISK_SIZE" >/dev/null
else
echo " $vol already exists"
fi
done
# Build disk arguments based on hypervisor
DISK_ARGS=()
if [[ "$HYPERVISOR" == "qemu" ]]; then
for i in $(seq 0 $((NUM_DISKS - 1))); do
vol="${DISK_PREFIX}${i}.${DISK_SUFFIX}"
DISK_ARGS+=("-drive" "file=$vol,format=$DISK_FORMAT,if=virtio")
done
elif [[ "$HYPERVISOR" == "ch" ]]; then
# cloud-hypervisor requires comma-separated disk list in single --disk argument
# Use raw format (no compression)
CH_DISKS=""
for i in $(seq 0 $((NUM_DISKS - 1))); do
vol="${DISK_PREFIX}${i}.${DISK_SUFFIX}"
if [[ -z "$CH_DISKS" ]]; then
CH_DISKS="path=$vol,iommu=on"
else
CH_DISKS="${CH_DISKS},path=$vol,iommu=on"
fi
done
if [[ -n "$CH_DISKS" ]]; then
DISK_ARGS+=("--disk" "$CH_DISKS")
fi
fi
# Launch the appropriate hypervisor
echo "Launching with $HYPERVISOR hypervisor..."
echo ""
if [[ "$HYPERVISOR" == "qemu" ]]; then
exec qemu-system-x86_64 \
-kernel "$KERNEL" \
-m "$MEMORY" \
-enable-kvm \
-cpu host \
-net nic,model=virtio \
-net bridge,br="$BRIDGE" \
"${DISK_ARGS[@]}" \
-chardev stdio,id=char0,mux=on,logfile=serial.log \
-serial chardev:char0 \
-mon chardev=char0 \
-append "console=ttyS0,115200n8 console=tty"
elif [[ "$HYPERVISOR" == "ch" ]]; then
# Setup TAP interface for cloud-hypervisor
setup_tap_interface "$TAP_INTERFACE" "$BRIDGE"
# Cleanup on exit (do not exec here: exec would replace the shell and the
# EXIT trap would never fire)
trap "cleanup_tap_interface '$TAP_INTERFACE'" EXIT
cloud-hypervisor \
--kernel "$KERNEL" \
--memory size="${MEMORY}M" \
--cpus boot=2 \
--net "tap=$TAP_INTERFACE" \
"${DISK_ARGS[@]}" \
--console off \
--serial tty \
--cmdline "console=ttyS0,115200n8"
fi
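The qemu path above relies on `-net bridge`, which uses qemu-bridge-helper; the bridge named by `BRIDGE` must already exist and be listed in the helper's ACL. A one-time host setup might look like this (the ACL path is the common default and may differ per distro):

```shell
# Create the bridge runit.sh expects (default name: zosbr) and bring it up
sudo ip link add name zosbr type bridge
sudo ip link set zosbr up
# Allow qemu-bridge-helper to attach taps to it (ACL path varies by distro)
echo "allow zosbr" | sudo tee -a /etc/qemu/bridge.conf
```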

scripts/build.sh

@@ -15,7 +15,6 @@ source "${SCRIPT_DIR}/lib/alpine.sh"
 source "${SCRIPT_DIR}/lib/components.sh"
 source "${SCRIPT_DIR}/lib/initramfs.sh"
 source "${SCRIPT_DIR}/lib/kernel.sh"
-source "${SCRIPT_DIR}/lib/testing.sh"

 # Build configuration loaded from config/build.conf via common.sh
 # Environment variables can override config file values
@@ -42,7 +41,6 @@ ZINIT_CONFIG_DIR="${CONFIG_DIR}/zinit"
 # Build options
 USE_CONTAINER="${USE_CONTAINER:-auto}"
 CLEAN_BUILD="${CLEAN_BUILD:-false}"
-SKIP_TESTS="${SKIP_TESTS:-false}"
 KEEP_ARTIFACTS="${KEEP_ARTIFACTS:-false}"

 # Display usage information
@@ -54,7 +52,6 @@ Usage: $0 [OPTIONS]
 Options:
   --clean               Clean build (remove all artifacts first)
-  --skip-tests          Skip boot tests
   --keep-artifacts      Keep build artifacts after completion
   --force-rebuild       Force rebuild all stages (ignore completion markers)
   --rebuild-from=STAGE  Force rebuild from specific stage onward
@@ -92,10 +89,6 @@ function parse_arguments() {
             CLEAN_BUILD="true"
             shift
             ;;
-        --skip-tests)
-            SKIP_TESTS="true"
-            shift
-            ;;
         --keep-artifacts)
             KEEP_ARTIFACTS="true"
             shift
@@ -408,20 +401,18 @@ function main_build_process() {
         log_debug "stage_kernel_build: defaulting INITRAMFS_ARCHIVE=${INITRAMFS_ARCHIVE}"
     fi

+    # Ensure FULL_KERNEL_VERSION is set for versioned output filename
+    if [[ -z "${FULL_KERNEL_VERSION:-}" ]]; then
+        FULL_KERNEL_VERSION=$(kernel_get_full_version "$KERNEL_VERSION" "$KERNEL_CONFIG")
+        export FULL_KERNEL_VERSION
+        log_debug "stage_kernel_build: resolved FULL_KERNEL_VERSION=${FULL_KERNEL_VERSION}"
+    fi
+
     kernel_build_with_initramfs "$KERNEL_CONFIG" "$INITRAMFS_ARCHIVE" "$kernel_output"
     export KERNEL_OUTPUT="$kernel_output"
 }

-function stage_boot_tests() {
-    if [[ "$SKIP_TESTS" != "true" ]]; then
-        # Ensure KERNEL_OUTPUT is set (for incremental builds)
-        if [[ -z "${KERNEL_OUTPUT:-}" ]]; then
-            KERNEL_OUTPUT="${DIST_DIR}/vmlinuz.efi"
-            export KERNEL_OUTPUT
-        fi
-        testing_run_all "$KERNEL_OUTPUT"
-    fi
-}
+# Boot tests removed - use runit.sh for testing instead

 # Run all stages with incremental tracking
 stage_run "alpine_extract" stage_alpine_extract
@@ -442,7 +433,6 @@ function main_build_process() {
     stage_run "initramfs_create" stage_initramfs_create
     stage_run "initramfs_test" stage_initramfs_test
     stage_run "kernel_build" stage_kernel_build
-    stage_run "boot_tests" stage_boot_tests

     # Calculate build time
     local end_time=$(date +%s)
@@ -504,9 +494,6 @@ function main() {
     # Pass through relevant arguments to container
     local container_args=""
-    if [[ "$SKIP_TESTS" == "true" ]]; then
-        container_args="$container_args --skip-tests"
-    fi
     if [[ "$KEEP_ARTIFACTS" == "true" ]]; then
         container_args="$container_args --keep-artifacts"
     fi

scripts/dev-container.sh

@@ -9,6 +9,7 @@ PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
 # Container configuration
 CONTAINER_NAME="zero-os-dev"
 BUILDER_IMAGE="zero-os-builder:latest"
+HOST_SSH_DIR="${SSH_MOUNT_DIR:-${HOME}/.ssh}"

 # Default to verbose, streaming logs for dev flows unless explicitly disabled
 export DEBUG="${DEBUG:-1}"
@@ -88,18 +89,33 @@ function dev_container_start() {
     log_info "Creating new development container: ${CONTAINER_NAME}"

     # Create persistent container with all necessary mounts and environment
-    safe_execute podman run -d \
-        --name "$CONTAINER_NAME" \
-        --privileged \
-        -v "${PROJECT_ROOT}:/workspace" \
-        -w /workspace \
-        -e DEBUG=1 \
-        -e ALPINE_VERSION=3.22 \
-        -e KERNEL_VERSION=6.12.44 \
-        -e RUST_TARGET=x86_64-unknown-linux-musl \
-        -e OPTIMIZATION_LEVEL=max \
-        "$BUILDER_IMAGE" \
-        sleep infinity
+    local podman_args=(
+        run -d
+        --name "$CONTAINER_NAME"
+        --privileged
+        -v "${PROJECT_ROOT}:/workspace"
+        -w /workspace
+        -e DEBUG=1
+        -e ALPINE_VERSION=3.22
+        -e KERNEL_VERSION=6.12.44
+        -e RUST_TARGET=x86_64-unknown-linux-musl
+        -e OPTIMIZATION_LEVEL=max
+    )
+    if [[ -d "$HOST_SSH_DIR" ]]; then
+        log_info "Mounting SSH directory: ${HOST_SSH_DIR} -> /root/.ssh (read-only)"
+        podman_args+=(-v "${HOST_SSH_DIR}:/root/.ssh:ro")
+    else
+        log_warn "SSH directory not found at ${HOST_SSH_DIR}; skipping SSH mount"
+    fi
+    podman_args+=(
+        "$BUILDER_IMAGE"
+        sleep infinity
+    )
+    safe_execute podman "${podman_args[@]}"

     log_info "Development container started successfully"
     log_info "Container name: ${CONTAINER_NAME}"


@@ -1,55 +1,141 @@
-# Function List - scripts/lib Library
+# Function List - Repository (scripts and libraries)

-This document lists all functions currently defined under [scripts/lib](scripts/lib) with their source locations.
+This document lists functions defined under scripts/ and scripts/lib with source locations.
+Regenerated from repository on 2025-10-01.

-## alpine.sh - Alpine Linux operations
-File: [scripts/lib/alpine.sh](scripts/lib/alpine.sh)
-- [alpine_extract_miniroot()](scripts/lib/alpine.sh:14) - Download and extract Alpine miniroot
-- [alpine_setup_chroot()](scripts/lib/alpine.sh:70) - Setup chroot mounts and resolv.conf
-- [alpine_cleanup_chroot()](scripts/lib/alpine.sh:115) - Unmount chroot mounts
-- [alpine_install_packages()](scripts/lib/alpine.sh:142) - Install packages from packages.list
-- [alpine_aggressive_cleanup()](scripts/lib/alpine.sh:211) - Reduce image size by removing docs/locales/etc
-- [alpine_configure_repos()](scripts/lib/alpine.sh:321) - Configure APK repositories
-- [alpine_configure_system()](scripts/lib/alpine.sh:339) - Configure hostname, hosts, timezone, profile
-- [alpine_install_firmware()](scripts/lib/alpine.sh:392) - Install required firmware packages
+## Top-level build scripts
+
+File: [scripts/build.sh](scripts/build.sh)
+- [show_usage()](scripts/build.sh:49)
+- [parse_arguments()](scripts/build.sh:88)
+- [setup_build_environment()](scripts/build.sh:133)
+- [verify_configuration_files()](scripts/build.sh:174)
+- [main_build_process()](scripts/build.sh:214)
+- [stage_alpine_extract()](scripts/build.sh:223)
+- [stage_alpine_configure()](scripts/build.sh:227)
+- [stage_alpine_packages()](scripts/build.sh:232)
+- [stage_alpine_firmware()](scripts/build.sh:236)
+- [stage_components_build()](scripts/build.sh:240)
+- [stage_components_verify()](scripts/build.sh:244)
+- [stage_kernel_modules()](scripts/build.sh:248)
+- [stage_zinit_setup()](scripts/build.sh:265)
+- [stage_init_script()](scripts/build.sh:269)
+- [stage_components_copy()](scripts/build.sh:273)
+- [stage_modules_setup()](scripts/build.sh:277)
+- [stage_modules_copy()](scripts/build.sh:286)
+- [stage_rfs_flists()](scripts/build.sh:299)
+- [stage_cleanup()](scripts/build.sh:366)
+- [stage_validation()](scripts/build.sh:370)
+- [stage_initramfs_create()](scripts/build.sh:374)
+- [stage_initramfs_test()](scripts/build.sh:385)
+- [stage_kernel_build()](scripts/build.sh:398)
+- [stage_boot_tests()](scripts/build.sh:415)
+- [main()](scripts/build.sh:470)
+
+File: [scripts/clean.sh](scripts/clean.sh)
+- [show_usage()](scripts/clean.sh:21)
+- [parse_arguments()](scripts/clean.sh:50)
+- [clean_build_artifacts()](scripts/clean.sh:90)
+- [clean_downloads()](scripts/clean.sh:127)
+- [clean_container_images()](scripts/clean.sh:155)
+- [show_space_recovery()](scripts/clean.sh:176)
+- [verify_cleanup()](scripts/clean.sh:203)
+- [main()](scripts/clean.sh:240)
+
+File: [scripts/dev-container.sh](scripts/dev-container.sh)
+- [show_usage()](scripts/dev-container.sh:19)
+- [ensure_builder_image()](scripts/dev-container.sh:44)
+- [dev_container_start()](scripts/dev-container.sh:70)
+- [dev_container_stop()](scripts/dev-container.sh:109)
+- [dev_container_shell()](scripts/dev-container.sh:121)
+- [dev_container_build()](scripts/dev-container.sh:139)
+- [dev_container_clean()](scripts/dev-container.sh:168)
+- [dev_container_status()](scripts/dev-container.sh:180)
+- [dev_container_logs()](scripts/dev-container.sh:202)
+- [main()](scripts/dev-container.sh:214)
+
+File: [scripts/make-grub-usb.sh](scripts/make-grub-usb.sh)
+- [error()](scripts/make-grub-usb.sh:45)
+- [info()](scripts/make-grub-usb.sh:46)
+- [warn()](scripts/make-grub-usb.sh:47)
+- [die()](scripts/make-grub-usb.sh:48)
+- [require_root()](scripts/make-grub-usb.sh:50)
+- [command_exists()](scripts/make-grub-usb.sh:54)
+- [parse_args()](scripts/make-grub-usb.sh:56)
+- [confirm_dangerous()](scripts/make-grub-usb.sh:81)
+- [check_prereqs()](scripts/make-grub-usb.sh:93)
+- [resolve_defaults()](scripts/make-grub-usb.sh:101)
+- [umount_partitions()](scripts/make-grub-usb.sh:116)
+- [partition_device_gpt()](scripts/make-grub-usb.sh:139)
+- [format_esp()](scripts/make-grub-usb.sh:158)
+- [mount_esp()](scripts/make-grub-usb.sh:165)
+- [install_grub()](scripts/make-grub-usb.sh:171)
+- [copy_kernel_initrd()](scripts/make-grub-usb.sh:180)
+- [write_grub_cfg()](scripts/make-grub-usb.sh:190)
+- [cleanup()](scripts/make-grub-usb.sh:226)
+- [main()](scripts/make-grub-usb.sh:235)
+
+File: [scripts/rebuild-after-zinit.sh](scripts/rebuild-after-zinit.sh)
+- [log()](scripts/rebuild-after-zinit.sh:17)
+- [in_container()](scripts/rebuild-after-zinit.sh:67)
+- [check_dir_changed()](scripts/rebuild-after-zinit.sh:81)
+- [list_some_changes()](scripts/rebuild-after-zinit.sh:89)
+- [compute_full_kver()](scripts/rebuild-after-zinit.sh:131)
+- [modules_dir_for_full()](scripts/rebuild-after-zinit.sh:146)
+
+File: [scripts/test.sh](scripts/test.sh)
+- [show_usage()](scripts/test.sh:20)
+- [parse_arguments()](scripts/test.sh:46)
+- [run_tests()](scripts/test.sh:105)
+- [main()](scripts/test.sh:182)
+
+## Library scripts

-## common.sh - Core utilities
 File: [scripts/lib/common.sh](scripts/lib/common.sh)
 - [log_info()](scripts/lib/common.sh:31)
 - [log_warn()](scripts/lib/common.sh:36)
 - [log_error()](scripts/lib/common.sh:41)
 - [log_debug()](scripts/lib/common.sh:46)
 - [safe_execute()](scripts/lib/common.sh:54)
-- [section_header()](scripts/lib/common.sh:79)
-- [command_exists()](scripts/lib/common.sh:89)
-- [in_container()](scripts/lib/common.sh:94)
-- [check_dependencies()](scripts/lib/common.sh:99)
-- [safe_mkdir()](scripts/lib/common.sh:142)
-- [safe_rmdir()](scripts/lib/common.sh:149)
-- [safe_copy()](scripts/lib/common.sh:158)
-- [is_absolute_path()](scripts/lib/common.sh:166)
-- [resolve_path()](scripts/lib/common.sh:171)
-- [get_file_size()](scripts/lib/common.sh:181)
-- [wait_for_file()](scripts/lib/common.sh:191)
-- [cleanup_on_exit()](scripts/lib/common.sh:205)
+- [safe_execute_stream()](scripts/lib/common.sh:77)
+- [section_header()](scripts/lib/common.sh:87)
+- [command_exists()](scripts/lib/common.sh:97)
+- [in_container()](scripts/lib/common.sh:102)
+- [check_dependencies()](scripts/lib/common.sh:107)
+- [safe_mkdir()](scripts/lib/common.sh:150)
+- [safe_rmdir()](scripts/lib/common.sh:157)
+- [safe_copy()](scripts/lib/common.sh:166)
+- [is_absolute_path()](scripts/lib/common.sh:174)
+- [resolve_path()](scripts/lib/common.sh:179)
+- [get_file_size()](scripts/lib/common.sh:189)
+- [wait_for_file()](scripts/lib/common.sh:199)
+- [cleanup_on_exit()](scripts/lib/common.sh:213)
+
+File: [scripts/lib/alpine.sh](scripts/lib/alpine.sh)
+- [alpine_extract_miniroot()](scripts/lib/alpine.sh:14)
+- [alpine_setup_chroot()](scripts/lib/alpine.sh:70)
+- [alpine_cleanup_chroot()](scripts/lib/alpine.sh:115)
+- [alpine_install_packages()](scripts/lib/alpine.sh:142)
+- [alpine_aggressive_cleanup()](scripts/lib/alpine.sh:211)
+- [alpine_configure_repos()](scripts/lib/alpine.sh:321)
+- [alpine_configure_system()](scripts/lib/alpine.sh:339)
+- [alpine_install_firmware()](scripts/lib/alpine.sh:392)

-## components.sh - Component management
 File: [scripts/lib/components.sh](scripts/lib/components.sh)
 - [components_parse_sources_conf()](scripts/lib/components.sh:13)
 - [components_download_git()](scripts/lib/components.sh:72)
-- [components_download_release()](scripts/lib/components.sh:104)
-- [components_process_extra_options()](scripts/lib/components.sh:144)
-- [components_build_component()](scripts/lib/components.sh:183)
-- [components_setup_rust_env()](scripts/lib/components.sh:217)
-- [build_zinit()](scripts/lib/components.sh:252)
-- [build_rfs()](scripts/lib/components.sh:299)
-- [build_mycelium()](scripts/lib/components.sh:346)
-- [install_rfs()](scripts/lib/components.sh:386)
-- [install_corex()](scripts/lib/components.sh:409)
-- [components_verify_installation()](scripts/lib/components.sh:436)
-- [components_cleanup()](scripts/lib/components.sh:472)
+- [components_download_release()](scripts/lib/components.sh:174)
+- [components_process_extra_options()](scripts/lib/components.sh:214)
+- [components_build_component()](scripts/lib/components.sh:253)
+- [components_setup_rust_env()](scripts/lib/components.sh:287)
+- [build_zinit()](scripts/lib/components.sh:322)
+- [build_rfs()](scripts/lib/components.sh:369)
+- [build_mycelium()](scripts/lib/components.sh:417)
+- [install_rfs()](scripts/lib/components.sh:457)
+- [install_corex()](scripts/lib/components.sh:480)
+- [components_verify_installation()](scripts/lib/components.sh:507)
+- [components_cleanup()](scripts/lib/components.sh:543)

-## docker.sh - Container runtime management
 File: [scripts/lib/docker.sh](scripts/lib/docker.sh)
 - [docker_detect_runtime()](scripts/lib/docker.sh:14)
 - [docker_verify_rootless()](scripts/lib/docker.sh:33)
@@ -58,36 +144,34 @@ File: [scripts/lib/docker.sh](scripts/lib/docker.sh)
 - [docker_start_rootless()](scripts/lib/docker.sh:116)
 - [docker_run_build()](scripts/lib/docker.sh:154)
 - [docker_commit_builder()](scripts/lib/docker.sh:196)
-- [docker_cleanup()](scripts/lib/docker.sh:208)
+- [docker_cleanup()](scripts/lib/docker.sh:209)
 - [docker_check_capabilities()](scripts/lib/docker.sh:248)
 - [docker_setup_rootless()](scripts/lib/docker.sh:279)

-## initramfs.sh - Initramfs assembly
 File: [scripts/lib/initramfs.sh](scripts/lib/initramfs.sh)
 - [initramfs_setup_zinit()](scripts/lib/initramfs.sh:13)
-- [initramfs_install_init_script()](scripts/lib/initramfs.sh:70)
-- [initramfs_copy_components()](scripts/lib/initramfs.sh:97)
-- [initramfs_setup_modules()](scripts/lib/initramfs.sh:225)
-- [initramfs_resolve_module_dependencies()](scripts/lib/initramfs.sh:313)
-- [initramfs_create_module_scripts()](scripts/lib/initramfs.sh:422)
-- [initramfs_strip_and_upx()](scripts/lib/initramfs.sh:486)
-- [initramfs_finalize_customization()](scripts/lib/initramfs.sh:569)
-- [initramfs_create_cpio()](scripts/lib/initramfs.sh:642)
-- [initramfs_validate()](scripts/lib/initramfs.sh:710)
-- [initramfs_test_archive()](scripts/lib/initramfs.sh:809)
-- [initramfs_copy_resolved_modules()](scripts/lib/initramfs.sh:846)
+- [initramfs_install_init_script()](scripts/lib/initramfs.sh:75)
+- [initramfs_copy_components()](scripts/lib/initramfs.sh:102)
+- [initramfs_setup_modules()](scripts/lib/initramfs.sh:230)
+- [initramfs_resolve_module_dependencies()](scripts/lib/initramfs.sh:312)
+- [resolve_single_module()](scripts/lib/initramfs.sh:348)
+- [initramfs_create_module_scripts()](scripts/lib/initramfs.sh:421)
+- [initramfs_strip_and_upx()](scripts/lib/initramfs.sh:485)
+- [initramfs_finalize_customization()](scripts/lib/initramfs.sh:568)
+- [initramfs_create_cpio()](scripts/lib/initramfs.sh:691)
+- [initramfs_validate()](scripts/lib/initramfs.sh:820)
+- [initramfs_test_archive()](scripts/lib/initramfs.sh:953)
+- [initramfs_copy_resolved_modules()](scripts/lib/initramfs.sh:991)

-## kernel.sh - Kernel building
 File: [scripts/lib/kernel.sh](scripts/lib/kernel.sh)
 - [kernel_get_full_version()](scripts/lib/kernel.sh:14)
 - [kernel_download_source()](scripts/lib/kernel.sh:28)
 - [kernel_apply_config()](scripts/lib/kernel.sh:82)
-- [kernel_modify_config_for_initramfs()](scripts/lib/kernel.sh:129)
+- [kernel_modify_config_for_initramfs()](scripts/lib/kernel.sh:130)
 - [kernel_build_with_initramfs()](scripts/lib/kernel.sh:174)
-- [kernel_build_modules()](scripts/lib/kernel.sh:228)
-- [kernel_cleanup()](scripts/lib/kernel.sh:284)
+- [kernel_build_modules()](scripts/lib/kernel.sh:243)
+- [kernel_cleanup()](scripts/lib/kernel.sh:298)

-## stages.sh - Build stage tracking
 File: [scripts/lib/stages.sh](scripts/lib/stages.sh)
 - [stages_init()](scripts/lib/stages.sh:12)
 - [stage_is_completed()](scripts/lib/stages.sh:33)
@@ -97,16 +181,46 @@ File: [scripts/lib/stages.sh](scripts/lib/stages.sh)
 - [stage_run()](scripts/lib/stages.sh:99)
 - [stages_status()](scripts/lib/stages.sh:134)

-## testing.sh - Boot testing
 File: [scripts/lib/testing.sh](scripts/lib/testing.sh)
 - [testing_qemu_boot()](scripts/lib/testing.sh:14)
 - [testing_qemu_basic_boot()](scripts/lib/testing.sh:55)
 - [testing_qemu_serial_boot()](scripts/lib/testing.sh:90)
-- [testing_qemu_interactive_boot()](scripts/lib/testing.sh:113)
+- [testing_qemu_interactive_boot()](scripts/lib/testing.sh:114)
 - [testing_cloud_hypervisor_boot()](scripts/lib/testing.sh:135)
-- [testing_cloud_hypervisor_basic()](scripts/lib/testing.sh:171)
+- [testing_cloud_hypervisor_basic()](scripts/lib/testing.sh:172)
 - [testing_cloud_hypervisor_serial()](scripts/lib/testing.sh:206)
-- [testing_analyze_boot_log()](scripts/lib/testing.sh:227)
+- [testing_analyze_boot_log()](scripts/lib/testing.sh:228)
 - [testing_run_all()](scripts/lib/testing.sh:299)
+
+## RFS tooling
+
+File: [scripts/rfs/common.sh](scripts/rfs/common.sh)
+- [rfs_common_project_root()](scripts/rfs/common.sh:12)
+- [rfs_common_load_build_kernel_version()](scripts/rfs/common.sh:42)
+- [rfs_common_load_rfs_s3_config()](scripts/rfs/common.sh:82)
+- [rfs_common_build_s3_store_uri()](scripts/rfs/common.sh:137)
+- [rfs_common_locate_rfs()](scripts/rfs/common.sh:171)
+- [rfs_common_require_sqlite3()](scripts/rfs/common.sh:198)
+- [rfs_common_locate_modules_dir()](scripts/rfs/common.sh:214)
+- [rfs_common_locate_firmware_dir()](scripts/rfs/common.sh:244)
+- [rfs_common_validate_modules_metadata()](scripts/rfs/common.sh:264)
+- [rfs_common_install_all_alpine_firmware_packages()](scripts/rfs/common.sh:298)
+- [rfs_common_patch_flist_stores()](scripts/rfs/common.sh:385)
+- [rfs_common_build_route_url()](scripts/rfs/common.sh:453)
+- [rfs_common_patch_flist_route_url()](scripts/rfs/common.sh:494)
+- [rfs_common_prepare_output()](scripts/rfs/common.sh:525)
+- [rfs_common_firmware_tag()](scripts/rfs/common.sh:533)
+
+File: [scripts/rfs/pack-modules.sh](scripts/rfs/pack-modules.sh)
+- [section()](scripts/rfs/pack-modules.sh:15)
+
+File: [scripts/rfs/pack-firmware.sh](scripts/rfs/pack-firmware.sh)
+- [section()](scripts/rfs/pack-firmware.sh:17)
+
+File: [scripts/rfs/verify-flist.sh](scripts/rfs/verify-flist.sh)
+- [usage()](scripts/rfs/verify-flist.sh:11)
+- [section()](scripts/rfs/verify-flist.sh:26)
+
+Notes:
+- Line numbers reflect current repository state; re-run generation after edits.
+- Nested/local functions are included under their parent section when applicable.

scripts/lib/common.sh

@@ -195,6 +195,26 @@ function get_file_size() {
 fi
 }

+# Get short git commit hash from a git repository directory
+function get_git_commit_hash() {
+    local repo_dir="$1"
+    local short="${2:-true}"  # Default to short hash
+
+    if [[ ! -d "$repo_dir/.git" ]]; then
+        echo "unknown"
+        return 1
+    fi
+
+    local hash
+    if [[ "$short" == "true" ]]; then
+        hash=$(cd "$repo_dir" && git rev-parse --short HEAD 2>/dev/null || echo "unknown")
+    else
+        hash=$(cd "$repo_dir" && git rev-parse HEAD 2>/dev/null || echo "unknown")
+    fi
+
+    echo "$hash"
+}
+
 # Wait for file to exist with timeout
 function wait_for_file() {
     local file="$1"
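The new get_git_commit_hash() helper can be exercised standalone; this copy mirrors the function in the hunk above so the example is self-contained:

```shell
#!/usr/bin/env bash
# Mirror of the helper added above, for standalone demonstration
get_git_commit_hash() {
    local repo_dir="$1"
    local short="${2:-true}"  # Default to short hash
    if [[ ! -d "$repo_dir/.git" ]]; then
        echo "unknown"
        return 1
    fi
    local hash
    if [[ "$short" == "true" ]]; then
        hash=$(cd "$repo_dir" && git rev-parse --short HEAD 2>/dev/null || echo "unknown")
    else
        hash=$(cd "$repo_dir" && git rev-parse HEAD 2>/dev/null || echo "unknown")
    fi
    echo "$hash"
}

# A directory without .git yields "unknown" (and a non-zero return code)
echo "result: $(get_git_commit_hash "$(mktemp -d)" || true)"
```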

scripts/lib/components.sh

@@ -34,32 +34,41 @@ function components_parse_sources_conf() {
local component_count=0 local component_count=0
# Hardcode known components to bypass parsing issues for now # Read entries from sources.conf (TYPE NAME URL VERSION BUILD_FUNCTION [EXTRA])
log_info "Building ThreeFold components (hardcoded for reliability)" while IFS= read -r _raw || [[ -n "$_raw" ]]; do
# Strip comments and trim whitespace
local line="${_raw%%#*}"
line="${line#"${line%%[![:space:]]*}"}"
line="${line%"${line##*[![:space:]]}"}"
[[ -z "$line" ]] && continue
# Component 1: zinit local type name url version build_func extra
component_count=$((component_count + 1)) # shellcheck disable=SC2086
log_info "Processing component ${component_count}: zinit (git)" read -r type name url version build_func extra <<<"$line"
-components_download_git "zinit" "https://github.com/threefoldtech/zinit" "master" "$components_dir"
-components_build_component "zinit" "build_zinit" "$components_dir"
-# Component 2: mycelium
-component_count=$((component_count + 1))
-log_info "Processing component ${component_count}: mycelium (git)"
-components_download_git "mycelium" "https://github.com/threefoldtech/mycelium" "v0.6.1" "$components_dir"
-components_build_component "mycelium" "build_mycelium" "$components_dir"
-# Component 3: rfs (pre-built release)
-component_count=$((component_count + 1))
-log_info "Processing component ${component_count}: rfs (release)"
-components_download_git "rfs" "https://github.com/threefoldtech/rfs" "development" "$components_dir"
-components_build_component "rfs" "build_rfs" "$components_dir"
-# Component 4: corex
-component_count=$((component_count + 1))
-log_info "Processing component ${component_count}: corex (release)"
-components_download_release "corex" "https://github.com/threefoldtech/corex/releases/download/2.1.4/corex-2.1.4-amd64-linux-static" "2.1.4" "$components_dir" "rename=corex"
-components_build_component "corex" "install_corex" "$components_dir"
+if [[ -z "${type:-}" || -z "${name:-}" || -z "${url:-}" || -z "${version:-}" || -z "${build_func:-}" ]]; then
+    log_warn "Skipping malformed entry: ${_raw}"
+    continue
+fi
+component_count=$((component_count + 1))
+log_info "Processing component ${component_count}: ${name} (${type})"
+case "$type" in
+    git)
+        components_download_git "$name" "$url" "$version" "$components_dir"
+        ;;
+    release)
+        components_download_release "$name" "$url" "$version" "$components_dir" "$extra"
+        ;;
+    *)
+        log_error "Unknown component type in sources.conf: ${type}"
+        return 1
+        ;;
+esac
+components_build_component "$name" "$build_func" "$components_dir"
+done <"$sources_file"
 if [[ $component_count -eq 0 ]]; then
     log_warn "No components found in sources configuration"
@@ -68,7 +77,7 @@ function components_parse_sources_conf() {
     fi
 }
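The parsing loop in `components_parse_sources_conf` expects each sources.conf entry to yield `type`, `name`, `url`, `version`, `build_func`, and an optional `extra` field. The actual file syntax is not shown in this hunk; a hypothetical whitespace-delimited layout consistent with those parser variables (delimiter and comment handling are assumptions) might look like:

```text
# type  name      url                                        version      build_func      extra
git     zinit     https://github.com/threefoldtech/zinit     master       build_zinit
git     mycelium  https://github.com/threefoldtech/mycelium  v0.6.1       build_mycelium
git     rfs       https://github.com/threefoldtech/rfs       development  build_rfs
release corex     https://github.com/threefoldtech/corex/releases/download/2.1.4/corex-2.1.4-amd64-linux-static  2.1.4  install_corex  rename=corex
```

Only the field names come from the loop's variables; the four example rows mirror the previously hardcoded components this loop replaces.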
-# Download Git repository
+# Download Git repository (reuse tree; only reclone if invalid or version not reachable)
 function components_download_git() {
     local name="$1"
     local url="$2"
@@ -80,24 +89,94 @@ function components_download_git() {
     local target_dir="${components_dir}/${name}"
     log_info "Repository: ${url}"
-    log_info "Version/Branch: ${version}"
+    log_info "Version/Branch/Tag: ${version}"
     log_info "Target directory: ${target_dir}"
-    # Always do fresh clone to avoid git state issues
-    if [[ -d "$target_dir" ]]; then
-        log_info "Removing existing ${name} directory for fresh clone"
-        safe_execute rm -rf "$target_dir"
-    fi
-    log_info "Cloning ${name} from ${url}"
-    safe_execute git clone --depth 1 --branch "$version" "$url" "$target_dir"
-    # Verify checkout
-    safe_execute cd "$target_dir"
-    local current_ref=$(git rev-parse HEAD)
-    log_info "Current commit: ${current_ref}"
-    log_info "Git component download complete: ${name}"
+    # Ensure parent exists
+    safe_mkdir "$components_dir"
+    # Decide whether we can reuse the existing working tree
+    local need_fresh_clone="0"
+    if [[ -d "$target_dir/.git" ]]; then
+        if ! git -C "$target_dir" rev-parse --is-inside-work-tree >/dev/null 2>&1; then
+            log_warn "Existing ${name} directory is not a valid git repo; will reclone"
+            need_fresh_clone="1"
+        fi
+    elif [[ -d "$target_dir" ]]; then
+        log_warn "Existing ${name} directory without .git; will reclone"
+        need_fresh_clone="1"
+    fi
+    if [[ "$need_fresh_clone" == "1" || ! -d "$target_dir" ]]; then
+        log_info "Cloning ${name} (fresh) from ${url}"
+        safe_execute git clone "$url" "$target_dir"
+    fi
+    # Ensure origin URL is correct (do not delete the tree if URL changed)
+    local current_url
+    current_url=$(git -C "$target_dir" remote get-url origin 2>/dev/null || echo "")
+    if [[ -n "$current_url" && "$current_url" != "$url" ]]; then
+        log_info "Updating origin URL: ${current_url} -> ${url}"
+        safe_execute git -C "$target_dir" remote set-url origin "$url"
+    elif [[ -z "$current_url" ]]; then
+        log_info "Setting origin URL to ${url}"
+        safe_execute git -C "$target_dir" remote add origin "$url" || true
+    fi
+    # Fetch updates and tags
+    safe_execute git -C "$target_dir" fetch --tags --prune origin
+    # Resolve desired commit for the requested version/branch/tag
+    local desired_rev=""
+    if git -C "$target_dir" rev-parse --verify "${version}^{commit}" >/dev/null 2>&1; then
+        desired_rev=$(git -C "$target_dir" rev-parse --verify "${version}^{commit}")
+    elif git -C "$target_dir" rev-parse --verify "origin/${version}^{commit}" >/dev/null 2>&1; then
+        desired_rev=$(git -C "$target_dir" rev-parse --verify "origin/${version}^{commit}")
+    else
+        log_warn "Version '${version}' not directly resolvable; fetching explicitly"
+        if git -C "$target_dir" fetch origin "${version}" --depth 1; then
+            desired_rev=$(git -C "$target_dir" rev-parse --verify FETCH_HEAD)
+        fi
+    fi
+    # Fallback: shallow clone at the requested ref if we still can't resolve
+    if [[ -z "$desired_rev" ]]; then
+        log_warn "Could not resolve revision for '${version}'. Performing fresh shallow clone at requested ref."
+        safe_execute rm -rf "${target_dir}.tmp"
+        if safe_execute git clone --depth 1 --branch "$version" "$url" "${target_dir}.tmp"; then
+            safe_execute rm -rf "$target_dir"
+            safe_execute mv "${target_dir}.tmp" "$target_dir"
+            desired_rev=$(git -C "$target_dir" rev-parse HEAD)
+        else
+            log_error "Failed to clone ${url} at '${version}'"
+            return 1
+        fi
+    fi
+    local current_rev
+    current_rev=$(git -C "$target_dir" rev-parse HEAD 2>/dev/null || echo "")
+    log_info "Current commit: ${current_rev:-<none>}"
+    log_info "Desired commit: ${desired_rev}"
+    if [[ -n "$current_rev" && "$current_rev" == "$desired_rev" ]]; then
+        log_info "Repository already at requested version; reusing working tree"
+    else
+        log_info "Checking out requested version"
+        # Prefer named refs when available; otherwise detach to exact commit
+        if git -C "$target_dir" show-ref --verify --quiet "refs/heads/${version}"; then
+            safe_execute git -C "$target_dir" checkout -f "${version}"
+        elif git -C "$target_dir" show-ref --verify --quiet "refs/remotes/origin/${version}"; then
+            safe_execute git -C "$target_dir" checkout -f -B "${version}" "origin/${version}"
+        elif git -C "$target_dir" show-ref --verify --quiet "refs/tags/${version}"; then
+            safe_execute git -C "$target_dir" checkout -f "tags/${version}"
+        else
+            safe_execute git -C "$target_dir" checkout -f --detach "${desired_rev}"
+        fi
+        # Initialize submodules if present (non-fatal)
+        safe_execute git -C "$target_dir" submodule update --init --recursive || true
+    fi
+    log_info "Git component ready: ${name} @ $(git -C "$target_dir" rev-parse --short HEAD)"
 }
 # Download release binary/archive
@@ -328,7 +407,6 @@ function build_rfs() {
     return 1
 fi
 # remove rust-toolchain.toml, as not needed with latest release
-safe_execute rm rust-toolchain.toml
 # Build with musl target
 safe_execute cargo build --release --target "$RUST_TARGET" --features build-binary
@@ -343,6 +421,93 @@ function build_rfs() {
 log_info "Built rfs binary (${binary_size}) at: ${binary_path}"
 }
# Build function for zosstorage (standard Rust build)
function build_zosstorage() {
local name="$1"
local component_dir="$2"
section_header "Building zosstorage with musl target"
components_setup_rust_env
log_info "Building zosstorage from: ${component_dir}"
if [[ ! -d "$component_dir" ]]; then
log_error "Component directory not found: ${component_dir}"
return 1
fi
log_info "Executing: cd $component_dir"
cd "$component_dir" || {
log_error "Failed to change to directory: $component_dir"
return 1
}
local current_dir
current_dir=$(pwd)
log_info "Current directory: ${current_dir}"
if [[ ! -f "Cargo.toml" ]]; then
log_error "Cargo.toml not found in: ${current_dir}"
return 1
fi
safe_execute cargo build --release --target "$RUST_TARGET"
local binary_path="target/${RUST_TARGET}/release/zosstorage"
if [[ ! -f "$binary_path" ]]; then
log_error "zosstorage binary not found at: ${binary_path}"
return 1
fi
local binary_size
binary_size=$(get_file_size "$binary_path")
log_info "Built zosstorage binary (${binary_size}) at: ${binary_path}"
}
# Build function for youki (standard Rust build)
function build_youki() {
local name="$1"
local component_dir="$2"
section_header "Building youki with musl target"
components_setup_rust_env
log_info "Building youki from: ${component_dir}"
if [[ ! -d "$component_dir" ]]; then
log_error "Component directory not found: ${component_dir}"
return 1
fi
log_info "Executing: cd $component_dir"
cd "$component_dir" || {
log_error "Failed to change to directory: $component_dir"
return 1
}
local current_dir
current_dir=$(pwd)
log_info "Current directory: ${current_dir}"
if [[ ! -f "Cargo.toml" ]]; then
log_error "Cargo.toml not found in: ${current_dir}"
return 1
fi
safe_execute cargo build --release --target "$RUST_TARGET"
local binary_path="target/${RUST_TARGET}/release/youki"
if [[ ! -f "$binary_path" ]]; then
log_error "youki binary not found at: ${binary_path}"
return 1
fi
local binary_size
binary_size=$(get_file_size "$binary_path")
log_info "Built youki binary (${binary_size}) at: ${binary_path}"
}
# Build function for mycelium (special subdirectory build)
function build_mycelium() {
    local name="$1"
@@ -439,29 +604,65 @@ function components_verify_installation() {
 section_header "Verifying Component Build"
-# List of expected built binaries and their locations in components directory
-local expected_binaries=(
-    "zinit/target/x86_64-unknown-linux-musl/release/zinit"
-    "rfs/target/x86_64-unknown-linux-musl/release/rfs"
-    "mycelium/myceliumd/target/x86_64-unknown-linux-musl/release/mycelium"
-    "corex/corex"
-)
+local ok_count=0
 local missing_count=0
-for binary in "${expected_binaries[@]}"; do
-    local full_path="${components_dir}/${binary}"
-    if [[ -f "$full_path" && -x "$full_path" ]]; then
-        local size=$(get_file_size "$full_path")
-        log_info "✓ Built ${binary##*/} (${size}) at: ${binary}"
-    else
-        log_error "Missing or not executable: ${binary}"
-        ((missing_count++))
-    fi
-done
+# zinit
+local zinit_bin="${components_dir}/zinit/target/x86_64-unknown-linux-musl/release/zinit"
+if [[ -x "$zinit_bin" ]]; then
+    log_info "✓ zinit ($(get_file_size "$zinit_bin")) at: ${zinit_bin#${components_dir}/}"
+    ((ok_count++))
+else
+    log_error "✗ zinit missing: ${zinit_bin#${components_dir}/}"
+    ((missing_count++))
+fi
+# rfs: accept both built and prebuilt locations
+local rfs_built="${components_dir}/rfs/target/x86_64-unknown-linux-musl/release/rfs"
+local rfs_release="${components_dir}/rfs/rfs"
+if [[ -x "$rfs_built" ]]; then
+    log_info "✓ rfs (built) ($(get_file_size "$rfs_built")) at: ${rfs_built#${components_dir}/}"
+    ((ok_count++))
+elif [[ -x "$rfs_release" ]]; then
+    log_info "✓ rfs (release) ($(get_file_size "$rfs_release")) at: ${rfs_release#${components_dir}/}"
+    ((ok_count++))
+else
+    log_error "✗ rfs missing: checked rfs/target/.../rfs and rfs/rfs"
+    ((missing_count++))
+fi
+# mycelium
+local mycelium_bin="${components_dir}/mycelium/myceliumd/target/x86_64-unknown-linux-musl/release/mycelium"
+if [[ -x "$mycelium_bin" ]]; then
+    log_info "✓ mycelium ($(get_file_size "$mycelium_bin")) at: ${mycelium_bin#${components_dir}/}"
+    ((ok_count++))
+else
+    log_error "✗ mycelium missing: ${mycelium_bin#${components_dir}/}"
+    ((missing_count++))
+fi
+# zosstorage
+local zosstorage_bin="${components_dir}/zosstorage/target/x86_64-unknown-linux-musl/release/zosstorage"
+if [[ -x "$zosstorage_bin" ]]; then
+    log_info "✓ zosstorage ($(get_file_size "$zosstorage_bin")) at: ${zosstorage_bin#${components_dir}/}"
+    ((ok_count++))
+else
+    log_error "✗ zosstorage missing: ${zosstorage_bin#${components_dir}/}"
+    ((missing_count++))
+fi
+# corex
+local corex_bin="${components_dir}/corex/corex"
+if [[ -x "$corex_bin" ]]; then
+    log_info "✓ corex ($(get_file_size "$corex_bin")) at: ${corex_bin#${components_dir}/}"
+    ((ok_count++))
+else
+    log_error "✗ corex missing: ${corex_bin#${components_dir}/}"
+    ((missing_count++))
+fi
 if [[ $missing_count -eq 0 ]]; then
-    log_info "All components built successfully"
+    log_info "All components built/installed successfully"
     return 0
 else
     log_error "${missing_count} components missing or failed to build"
@@ -495,7 +696,7 @@ function components_cleanup() {
 export -f components_parse_sources_conf
 export -f components_download_git components_download_release components_process_extra_options
 export -f components_build_component components_setup_rust_env
-export -f build_zinit build_rfs build_mycelium install_corex
+export -f build_zinit build_rfs build_zosstorage build_youki build_mycelium install_corex
 export -f components_verify_installation components_cleanup
 # Export functions for install_rfs
 export -f install_rfs


@@ -187,6 +187,34 @@ function initramfs_copy_components() {
((missing_count++))
fi
# Copy zosstorage to /usr/bin
local zosstorage_binary="${components_dir}/zosstorage/target/x86_64-unknown-linux-musl/release/zosstorage"
if [[ -f "$zosstorage_binary" ]]; then
safe_mkdir "${initramfs_dir}/usr/bin"
safe_execute cp "$zosstorage_binary" "${initramfs_dir}/usr/bin/zosstorage"
safe_execute chmod +x "${initramfs_dir}/usr/bin/zosstorage"
local original_size=$(get_file_size "${initramfs_dir}/usr/bin/zosstorage")
if strip "${initramfs_dir}/usr/bin/zosstorage" 2>/dev/null; then
log_debug "Stripped zosstorage"
else
log_debug "zosstorage already stripped or strip failed"
fi
if command_exists "upx" && upx --best --force "${initramfs_dir}/usr/bin/zosstorage" >/dev/null 2>&1; then
log_debug "UPX compressed zosstorage"
else
log_debug "UPX failed or already compressed"
fi
local final_size=$(get_file_size "${initramfs_dir}/usr/bin/zosstorage")
log_info "✓ Copied zosstorage ${original_size} -> ${final_size} to /usr/bin/zosstorage"
((copied_count++))
else
log_error "✗ zosstorage binary not found: ${zosstorage_binary}"
((missing_count++))
fi
# Copy corex to /usr/bin
local corex_binary="${components_dir}/corex/corex"
if [[ -f "$corex_binary" ]]; then
@@ -617,7 +645,6 @@ EOF
 log_info "Branding enabled: updating /etc/issue to Zero-OS branding"
 cat > "${initramfs_dir}/etc/issue" << 'EOF'
 Zero-OS \r \m
-Built on \l
 EOF
 else
@@ -780,7 +807,7 @@ function initramfs_create_cpio() {
 case "$compression" in
 "xz")
     log_info "Creating XZ compressed CPIO archive"
-    safe_execute find . -print0 | cpio -o -H newc -0 | xz -${XZ_COMPRESSION_LEVEL} --check=crc32 > "$output_file_abs"
+    safe_execute find . -print0 | cpio -o -H newc -0 | xz -T 8 -${XZ_COMPRESSION_LEVEL} --check=crc32 > "$output_file_abs"
 ;;
 "gzip"|"gz")
     log_info "Creating gzip compressed CPIO archive"
@@ -930,6 +957,7 @@ function initramfs_validate() {
 local component_binaries=(
     "usr/bin/rfs"
     "usr/bin/mycelium"
+    "usr/bin/zosstorage"
     "usr/bin/corex"
 )


@@ -224,12 +224,33 @@ function kernel_build_with_initramfs() {
safe_mkdir "$output_dir"
safe_copy "$kernel_image" "$output_abs"
# Also copy with versioned filename including kernel version and zinit hash
local full_kernel_version="${FULL_KERNEL_VERSION:-unknown}"
local zinit_hash="unknown"
local zinit_dir="${COMPONENTS_DIR:-${PROJECT_ROOT}/components}/zinit"
if [[ -d "$zinit_dir/.git" ]]; then
zinit_hash=$(get_git_commit_hash "$zinit_dir")
else
log_warn "zinit git directory not found at ${zinit_dir}, using 'unknown' for hash"
fi
# Create versioned filename: vmlinuz-{VERSION}-{ZINIT_HASH}.efi
local versioned_name="vmlinuz-${full_kernel_version}-${zinit_hash}.efi"
local versioned_output="${output_dir}/${versioned_name}"
safe_copy "$kernel_image" "$versioned_output"
 # Verify final kernel
 local kernel_size
 kernel_size=$(get_file_size "$output_abs")
+local versioned_size
+versioned_size=$(get_file_size "$versioned_output")
 log_info "Kernel build complete:"
 log_info "  Output file: ${output_abs}"
+log_info "  Versioned: ${versioned_output}"
 log_info "  Kernel size: ${kernel_size}"
+log_info "  Version: ${full_kernel_version}"
+log_info "  zinit hash: ${zinit_hash}"
 # Verify initramfs is embedded
 if strings "$output_file" | grep -q "initramfs"; then
@@ -237,6 +258,184 @@ function kernel_build_with_initramfs() {
else
log_warn "Initramfs embedding verification inconclusive"
fi
# Upload versioned kernel to S3 if enabled
kernel_upload_to_s3 "$versioned_output" "$full_kernel_version" "$zinit_hash"
}
# Upload versioned kernel to S3 using MinIO client (mcli/mc)
function kernel_upload_to_s3() {
local kernel_file="$1"
local kernel_version="$2"
local zinit_hash="$3"
section_header "Uploading Kernel to S3"
# Check if upload is enabled
if [[ "${UPLOAD_KERNEL:-false}" != "true" ]]; then
log_info "UPLOAD_KERNEL not enabled; skipping kernel upload"
return 0
fi
# Verify kernel file exists
if [[ ! -f "$kernel_file" ]]; then
log_error "Kernel file not found: ${kernel_file}"
return 1
fi
# Load S3 configuration from rfs.conf
local rfs_conf="${PROJECT_ROOT}/config/rfs.conf"
local rfs_conf_example="${PROJECT_ROOT}/config/rfs.conf.example"
if [[ -f "$rfs_conf" ]]; then
# shellcheck source=/dev/null
source "$rfs_conf"
log_info "Loaded S3 config from: ${rfs_conf}"
elif [[ -f "$rfs_conf_example" ]]; then
# shellcheck source=/dev/null
source "$rfs_conf_example"
log_warn "Using example S3 config: ${rfs_conf_example}"
else
log_error "No S3 config found (config/rfs.conf or config/rfs.conf.example)"
return 1
fi
# Validate required S3 variables
for var in S3_ENDPOINT S3_BUCKET S3_PREFIX S3_ACCESS_KEY S3_SECRET_KEY; do
if [[ -z "${!var}" ]]; then
log_error "Missing required S3 variable: ${var}"
return 1
fi
done
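`kernel_upload_to_s3` sources `config/rfs.conf` and hard-fails unless the five S3 variables it checks above are set. A hypothetical fragment of that file (variable names are taken from this script; all values are illustrative placeholders, not real endpoints or credentials) could look like:

```shell
# Hypothetical config/rfs.conf fragment -- values are placeholders
S3_ENDPOINT="https://s3.example.com"
S3_BUCKET="zos"
S3_PREFIX="zosbuilder/dev"
S3_ACCESS_KEY="EXAMPLEACCESSKEY"
S3_SECRET_KEY="examplesecretkey"
# Optional knobs read elsewhere in the upload path
KERNEL_SUBPATH="kernel"   # subdirectory under the prefix for kernels (default "kernel")
UPLOAD_KERNEL="true"      # gate checked at the top of kernel_upload_to_s3
```

With this fragment the destination computed by the script becomes `rfs/zos/zosbuilder/dev/kernel/<versioned_filename>`.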
# Detect MinIO client binary (mcli or mc)
local mcli_bin=""
if command -v mcli >/dev/null 2>&1; then
mcli_bin="mcli"
elif command -v mc >/dev/null 2>&1; then
mcli_bin="mc"
else
log_warn "MinIO Client not found (expected mcli or mc); skipping kernel upload"
return 0
fi
log_info "Using MinIO client: ${mcli_bin}"
# Setup S3 alias
log_info "Configuring S3 alias..."
safe_execute "${mcli_bin}" alias set rfs "${S3_ENDPOINT}" "${S3_ACCESS_KEY}" "${S3_SECRET_KEY}"
# Construct destination path: rfs/{bucket}/{prefix}/kernel/{versioned_filename}
local kernel_filename
kernel_filename=$(basename "$kernel_file")
local kernel_subpath="${KERNEL_SUBPATH:-kernel}"
local mcli_dst="rfs/${S3_BUCKET}/${S3_PREFIX%/}/${kernel_subpath%/}/${kernel_filename}"
# Upload kernel
log_info "Uploading: ${kernel_file} -> ${mcli_dst}"
safe_execute "${mcli_bin}" cp "${kernel_file}" "${mcli_dst}"
log_info "✓ Kernel uploaded successfully"
log_info " Version: ${kernel_version}"
log_info " zinit: ${zinit_hash}"
log_info " S3 path: ${mcli_dst}"
# Generate and upload kernel index
kernel_generate_index "${mcli_bin}" "${S3_BUCKET}" "${S3_PREFIX}" "${kernel_subpath}"
}
# Generate kernel index file from S3 listing and upload it
function kernel_generate_index() {
local mcli_bin="$1"
local bucket="$2"
local prefix="$3"
local kernel_subpath="$4"
section_header "Generating Kernel Index"
# Construct S3 path for listing
local s3_path="rfs/${bucket}/${prefix%/}/${kernel_subpath%/}/"
log_info "Listing kernels from: ${s3_path}"
# List all files in the kernel directory
local ls_output
if ! ls_output=$("${mcli_bin}" ls "${s3_path}" 2>&1); then
log_warn "Failed to list S3 kernel directory, index not generated"
log_debug "mcli ls output: ${ls_output}"
return 0
fi
# Parse output and extract kernel filenames matching vmlinuz-*
local kernels=()
while IFS= read -r line; do
# mcli ls output format: [DATE TIME TZ] SIZE FILENAME
# Extract filename (last field)
local filename
filename=$(echo "$line" | awk '{print $NF}')
# Filter for vmlinuz files (both .efi and without extension)
if [[ "$filename" =~ ^vmlinuz-.* ]]; then
kernels+=("$filename")
fi
done <<< "$ls_output"
if [[ ${#kernels[@]} -eq 0 ]]; then
log_warn "No kernels found in S3 path: ${s3_path}"
return 0
fi
log_info "Found ${#kernels[@]} kernel(s)"
# Create index files in dist directory
local index_dir="${DIST_DIR:-${PROJECT_ROOT}/dist}"
local text_index="${index_dir}/kernels.txt"
local json_index="${index_dir}/kernels.json"
# Generate text index (one kernel per line, sorted)
printf "%s\n" "${kernels[@]}" | sort -r > "$text_index"
log_info "Created text index: ${text_index}"
# Generate JSON index (array of kernel filenames)
{
echo "{"
echo " \"kernels\": ["
local first=true
for kernel in $(printf "%s\n" "${kernels[@]}" | sort -r); do
if [[ "$first" == "true" ]]; then
first=false
else
echo ","
fi
printf " \"%s\"" "$kernel"
done
echo ""
echo " ],"
echo " \"updated\": \"$(date -u +%Y-%m-%dT%H:%M:%SZ)\","
echo " \"count\": ${#kernels[@]}"
echo "}"
} > "$json_index"
log_info "Created JSON index: ${json_index}"
# Upload both index files to S3
log_info "Uploading kernel index files to S3..."
local text_dst="${s3_path}kernels.txt"
local json_dst="${s3_path}kernels.json"
if safe_execute "${mcli_bin}" cp "$text_index" "$text_dst"; then
log_info "✓ Uploaded text index: ${text_dst}"
else
log_warn "Failed to upload text index"
fi
if safe_execute "${mcli_bin}" cp "$json_index" "$json_dst"; then
log_info "✓ Uploaded JSON index: ${json_dst}"
else
log_warn "Failed to upload JSON index"
fi
log_info "Kernel index generation complete"
}
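For reference, under the versioned naming scheme above (`vmlinuz-{VERSION}-{ZINIT_HASH}.efi`), the `kernels.json` produced by `kernel_generate_index` would take roughly this shape (filenames and the hash are illustrative; only the kernel version 6.12.49 and the `kernels`/`updated`/`count` keys come from this diff):

```json
{
  "kernels": [
    "vmlinuz-6.12.49-a1b2c3d.efi"
  ],
  "updated": "2025-11-11T19:49:36Z",
  "count": 1
}
```

The text index `kernels.txt` lists the same filenames one per line, sorted in reverse so the newest build sorts first.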
# Build and install modules in container for proper dependency resolution


@@ -218,13 +218,9 @@ build_from_args=()
 if in_container; then
     # Run directly when already inside the dev/build container
-    if [[ "$run_tests" -eq 1 ]]; then
-        log "Including boot tests (in-container)"
-        DEBUG=1 "${PROJECT_ROOT}/scripts/build.sh" "${build_from_args[@]}" "${extra_args[@]}"
-    else
-        log "Skipping boot tests (in-container)"
-        DEBUG=1 "${PROJECT_ROOT}/scripts/build.sh" --skip-tests "${build_from_args[@]}" "${extra_args[@]}"
-    fi
+    # Note: Tests are run separately using runit.sh, not during build
+    log "Running rebuild (in-container) - use runit.sh for testing"
+    DEBUG=1 "${PROJECT_ROOT}/scripts/build.sh" "${build_from_args[@]}" "${extra_args[@]}"
 else
     # Not in container: delegate to dev-container manager which ensures container exists and is running
     devctl="${PROJECT_ROOT}/scripts/dev-container.sh"
@@ -234,11 +230,7 @@ else
         exit 1
     fi
-    if [[ "$run_tests" -eq 1 ]]; then
-        log "Including boot tests via dev-container"
-        "$devctl" build "${build_from_args[@]}" "${extra_args[@]}"
-    else
-        log "Skipping boot tests via dev-container"
-        "$devctl" build --skip-tests "${build_from_args[@]}" "${extra_args[@]}"
-    fi
+    # Note: Tests are run separately using runit.sh, not during build
+    log "Running rebuild via dev-container - use runit.sh for testing"
+    "$devctl" build "${build_from_args[@]}" "${extra_args[@]}"
 fi

scripts/rfs/pack-tree.sh Executable file

@@ -0,0 +1,155 @@
#!/bin/bash
# Pack an arbitrary directory tree into an RFS flist and upload blobs to S3 (Garage)
# - Uses config from config/rfs.conf (or rfs.conf.example fallback)
# - Packs the specified directory (default: repository root)
# - Patches manifest route URL with read-only S3 creds
# - Optionally patches stores to WEB_ENDPOINT and uploads the .fl via MinIO client
#
# Usage:
# scripts/rfs/pack-tree.sh [-p PATH] [-n MANIFEST_BASENAME] [--web-endpoint URL] [--keep-s3-fallback] [--no-upload]
# Examples:
# scripts/rfs/pack-tree.sh
# scripts/rfs/pack-tree.sh -p ./components/zinit -n zinit-src
# WEB_ENDPOINT=https://hub.grid.tf/zos/zosbuilder/store scripts/rfs/pack-tree.sh -p dist --keep-s3-fallback
set -euo pipefail
HERE="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=/dev/null
source "${HERE}/common.sh"
section() { echo -e "\n==== $* ====\n"; }
print_help() {
cat <<EOF
Pack a directory to an RFS flist and upload blobs to S3, using config/rfs.conf.
Options:
-p, --path PATH Source directory to pack (default: repository root)
-n, --name NAME Manifest base name (without .fl). Default: tree-<basename(PATH)>-<YYYYMMDDHHMMSS>
--web-endpoint URL Override WEB_ENDPOINT for stores patching (default from rfs.conf)
--keep-s3-fallback Keep s3:// store rows alongside HTTPS store in manifest
--no-upload Do not upload the .fl manifest via MinIO client even if enabled in config
-h, --help Show this help
Environment/config:
Reads S3 and route settings from config/rfs.conf (or rfs.conf.example).
Honors: WEB_ENDPOINT, KEEP_S3_FALLBACK, UPLOAD_MANIFESTS.
EOF
}
# Defaults
SRC_PATH=""
MANIFEST_NAME=""
ARG_WEB_ENDPOINT=""
ARG_KEEP_S3="false"
ARG_NO_UPLOAD="false"
# Parse args
while [[ $# -gt 0 ]]; do
case "$1" in
-p|--path)
SRC_PATH="$2"; shift 2;;
-n|--name)
MANIFEST_NAME="$2"; shift 2;;
--web-endpoint)
ARG_WEB_ENDPOINT="$2"; shift 2;;
--keep-s3-fallback)
ARG_KEEP_S3="true"; shift 1;;
--no-upload)
ARG_NO_UPLOAD="true"; shift 1;;
-h|--help)
print_help; exit 0;;
*)
echo "Unknown argument: $1" >&2; print_help; exit 1;;
esac
done
# Determine PROJECT_ROOT and defaults
PROJECT_ROOT="${PROJECT_ROOT:-$(rfs_common_project_root)}"
if [[ -z "${SRC_PATH}" ]]; then
SRC_PATH="${PROJECT_ROOT}"
fi
# Normalize SRC_PATH to absolute
if [[ "${SRC_PATH}" != /* ]]; then
SRC_PATH="$(cd "${SRC_PATH}" && pwd)"
fi
if [[ ! -d "${SRC_PATH}" ]]; then
log_error "Source path is not a directory: ${SRC_PATH}"
exit 1
fi
# Compute default manifest name if not given
if [[ -z "${MANIFEST_NAME}" ]]; then
base="$(basename "${SRC_PATH}")"
ts="$(date -u +%Y%m%d%H%M%S)"
MANIFEST_NAME="tree-${base}-${ts}"
fi
MANIFEST_FILE="${MANIFEST_NAME%.fl}.fl"
section "Loading RFS and kernel configuration"
# Kernel version for consistent logs (not strictly required for generic pack)
rfs_common_load_build_kernel_version
rfs_common_load_rfs_s3_config
# Allow CLI override for WEB_ENDPOINT and KEEP_S3_FALLBACK
if [[ -n "${ARG_WEB_ENDPOINT}" ]]; then
WEB_ENDPOINT="${ARG_WEB_ENDPOINT}"
fi
if [[ "${ARG_KEEP_S3}" == "true" ]]; then
KEEP_S3_FALLBACK="true"
fi
rfs_common_build_s3_store_uri
rfs_common_locate_rfs
# Prepare output
MANIFEST_PATH="$(rfs_common_prepare_output "${MANIFEST_FILE}")"
section "Packing directory to flist"
log_info "Source path: ${SRC_PATH}"
log_info "Manifest: ${MANIFEST_PATH}"
log_info "Store: ${RFS_S3_STORE_URI}"
safe_execute "${RFS_BIN}" --debug pack -m "${MANIFEST_PATH}" -s "${RFS_S3_STORE_URI}" "${SRC_PATH}"
section "Patching route.url in manifest to S3 read-only endpoint"
rfs_common_build_route_url
rfs_common_patch_flist_route_url "${MANIFEST_PATH}"
if [[ -n "${WEB_ENDPOINT:-}" ]]; then
section "Patching stores to HTTPS web endpoint"
log_info "WEB_ENDPOINT=${WEB_ENDPOINT}"
log_info "KEEP_S3_FALLBACK=${KEEP_S3_FALLBACK:-false}"
rfs_common_patch_flist_stores "${MANIFEST_PATH}" "${WEB_ENDPOINT}" "${KEEP_S3_FALLBACK:-false}"
else
log_warn "WEB_ENDPOINT not set; leaving manifest stores as-is (s3:// only)"
fi
# Optional manifest upload via MinIO client
UPLOAD="${UPLOAD_MANIFESTS:-false}"
if [[ "${ARG_NO_UPLOAD}" == "true" ]]; then
UPLOAD="false"
fi
if [[ "${UPLOAD}" == "true" ]]; then
section "Uploading manifest .fl via MinIO client (mcli/mc)"
if command -v mcli >/dev/null 2>&1; then
MCLI_BIN="mcli"
elif command -v mc >/dev/null 2>&1; then
MCLI_BIN="mc"
else
log_warn "MinIO Client not found (expected mcli or mc); skipping manifest upload"
MCLI_BIN=""
fi
if [[ -n "${MCLI_BIN}" ]]; then
local_subpath="${MANIFESTS_SUBPATH:-manifests}"
safe_execute "${MCLI_BIN}" alias set rfs "${S3_ENDPOINT}" "${S3_ACCESS_KEY}" "${S3_SECRET_KEY}"
dst="rfs/${S3_BUCKET}/${S3_PREFIX%/}/${local_subpath%/}/${MANIFEST_FILE}"
log_info "${MCLI_BIN} cp ${MANIFEST_PATH} ${dst}"
safe_execute "${MCLI_BIN}" cp "${MANIFEST_PATH}" "${dst}"
fi
else
log_info "UPLOAD_MANIFESTS=false; skipping manifest upload"
fi
section "Done"
log_info "Packed: ${MANIFEST_PATH}"