MOS Alpine Initramfs Builder

A comprehensive build system for creating a custom Alpine Linux 3.22 x86_64 initramfs with zinit process management, designed for MOS deployment.

Features

  • Alpine Linux 3.22 miniroot as base system
  • zinit process manager (complete OpenRC replacement)
  • Rootless containers (Docker/Podman compatible)
  • Rust components with musl targeting (zinit, rfs, mycelium, mosstorage)
  • Aggressive optimization (strip + UPX compression)
  • Dynamic firmware discovery via WHENCE + modinfo
  • GitHub Actions compatible build pipeline
  • Final output: vmlinuz.efi with embedded initramfs.cpio.xz

Quick Start

Prerequisites

The build runs entirely inside a container - you only need a container runtime on the host.

Host Requirements (for building)

Tool              Required  Purpose
podman or docker  Yes       Container runtime for builds
git               Yes       Clone and manage repository

Host Requirements (for testing)

Tool                QEMU      cloud-hypervisor  Purpose
qemu-system-x86_64  Yes       -                 VM hypervisor
cloud-hypervisor    -         Yes               VM hypervisor
qemu-img            Yes       Yes               Create disk images
screen              -         Yes               Console attachment
curl, jq            -         Yes               cloud-hypervisor API
sudo, ip            Optional  Yes               TAP/bridge networking

Ubuntu/Debian

# Minimal (build only)
sudo apt-get update
sudo apt-get install -y git podman

# With QEMU testing
sudo apt-get install -y git podman qemu-system-x86 qemu-utils

# With cloud-hypervisor testing
sudo apt-get install -y git podman qemu-utils screen curl jq iproute2
# Install cloud-hypervisor from releases: https://github.com/cloud-hypervisor/cloud-hypervisor/releases

Arch Linux

# Minimal (build only)
sudo pacman -S git podman

# With QEMU testing
sudo pacman -S git podman qemu-full

# With cloud-hypervisor testing
sudo pacman -S git podman qemu-img screen curl jq iproute2 cloud-hypervisor

Alpine Linux

# Minimal (build only)
apk add --no-cache git podman

# With QEMU testing
apk add --no-cache git podman qemu-system-x86_64 qemu-img

# With cloud-hypervisor testing
apk add --no-cache git podman qemu-img screen curl jq iproute2
# Install cloud-hypervisor from releases

Rootless Container Setup

For rootless Podman support (recommended):

# Configure subuid/subgid (if not already configured)
echo "$(whoami):100000:65536" | sudo tee -a /etc/subuid
echo "$(whoami):100000:65536" | sudo tee -a /etc/subgid

# Verify setup
podman system info

Build

# Clone the repository
git clone <repository-url>
cd mos_builder

# Make scripts executable
chmod +x scripts/build.sh scripts/clean.sh

# Build initramfs
./scripts/build.sh

# Output will be in dist/
ls -la dist/
# vmlinuz.efi          - Kernel with embedded initramfs
# initramfs.cpio.xz    - Standalone initramfs archive

Project Structure

mos_builder/
├── config/
│   ├── zinit/              # zinit service definitions
│   │   ├── services/       # individual service files
│   │   └── zinit.conf      # main zinit configuration
│   ├── packages.list       # Alpine packages to install
│   ├── sources.conf        # components to build (ThreeFold)
│   ├── kernel.config       # Linux kernel configuration
│   └── modules.conf        # kernel modules for boot
├── configs/                # existing configurations (migrated)
├── scripts/
│   ├── lib/
│   │   ├── docker.sh       # container management
│   │   ├── alpine.sh       # Alpine operations
│   │   ├── components.sh   # source building
│   │   ├── initramfs.sh    # assembly & optimization
│   │   └── kernel.sh       # kernel building
│   ├── build.sh            # main orchestrator
│   └── clean.sh            # cleanup script
├── initramfs/              # build output (generated)
├── components/             # component sources (generated)
├── kernel/                 # kernel source (generated)
└── dist/                   # final artifacts (generated)

Configuration

Component Sources (config/sources.conf)

Define components to download and build:

# Format: TYPE:NAME:URL:VERSION:BUILD_FUNCTION[:EXTRA_OPTIONS]

# Git repositories (Rust components with musl)
git:zinit:https://github.com/threefoldtech/zinit:master:build_zinit
git:mycelium:https://github.com/threefoldtech/mycelium:0.6.1:build_mycelium
git:rfs:https://github.com/threefoldtech/rfs:development:build_rfs

# Pre-built releases
release:corex:https://github.com/threefoldtech/corex/releases/download/2.1.4/corex-2.1.4-amd64-linux-static:2.1.4:install_corex:rename=corex

Package List (config/packages.list)

Alpine packages to install (NO OpenRC):

# Core system
alpine-baselayout
busybox
musl

# Hardware detection & modules
eudev
eudev-hwids
kmod

# Networking
iproute2
ethtool
dhcpcd

# Filesystems
btrfs-progs
dosfstools

# Security & SSH
haveged
openssh-server

# Tools
zellij
tcpdump
bmon

Module Loading (config/modules.conf)

Kernel modules copied to initramfs for boot (one per line):

# Module loading specification for MOS Alpine initramfs
# Format: MODULE_NAME (one per line, comments start with #)
#
# These modules are copied to the initramfs for minimal boot support.
# After network is up, full firmware and modules are RFS-mounted at runtime.

# Storage
libata
ahci
nvme

# Networking
virtio_net
e1000
e1000e
igb
ixgbe

# USB/HID
xhci_pci
usbhid

zinit Configuration (config/zinit/)

Main config (config/zinit/zinit.conf)

log_level: debug
init:
  - load-modules
  - networking
  - services

Service definitions (config/zinit/services/)

Services are migrated from the existing configs/zinit/ directory and keep their initialization order.
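
For illustration, a minimal service file could look like the following. The exact fields depend on the zinit version in use, and the sshd service and its dependency name here are only examples:

exec: /usr/sbin/sshd -D -e
after:
  - networking
test: test -s /etc/ssh/ssh_host_ed25519_key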

Build Process

Phase 1: Environment Setup

  1. Create build directories
  2. Install build dependencies
  3. Setup Rust musl target

Phase 2: Alpine Base

  1. Download Alpine 3.22 miniroot
  2. Extract to initramfs directory
  3. Install packages from config/packages.list
  4. NO OpenRC installation
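
A minimal sketch of steps 1-2 above; the exact miniroot file name is an assumption here, and scripts/lib/alpine.sh resolves the real URL and handles package installation into the tree:

# Download and unpack the Alpine 3.22 miniroot into the initramfs directory
curl -fLO https://dl-cdn.alpinelinux.org/alpine/v3.22/releases/x86_64/alpine-minirootfs-3.22.0-x86_64.tar.gz
mkdir -p initramfs
tar -xzf alpine-minirootfs-3.22.0-x86_64.tar.gz -C initramfs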

Phase 3: Component Building

  1. Parse config/sources.conf
  2. Download/clone sources to components/
  3. Build Rust components with musl:
    • zinit: Standard cargo build
    • rfs: Standard cargo build
    • mycelium: Build in myceliumd/ subdirectory
    • mosstorage: Built from the MOS storage orchestration component
  4. Install binaries to initramfs

Phase 4: System Configuration

  1. Replace /sbin/init with zinit
  2. Copy zinit configuration
  3. Setup module loading with dependency resolution
  4. Configure system services
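
Dependency resolution for the modules listed in config/modules.conf (step 3 above) can be illustrated with standard kmod tooling; the actual logic lives in scripts/lib/ and may differ:

# Print the insmod chain (dependencies first) for one module;
# "nvme" is just an example entry from config/modules.conf
modprobe --show-depends nvme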

Phase 5: Optimization

  1. Aggressive cleanup:
    • Remove docs, man pages, locales
    • Remove headers, development files
    • Remove APK cache
  2. Binary optimization:
    • Strip all executables and libraries
    • UPX compress all binaries
  3. Size verification

Phase 6: Packaging

  1. Create initramfs.cpio.xz with XZ compression
  2. Build kernel with embedded initramfs
  3. Generate vmlinuz.efi (default kernel)
  4. Generate versioned kernel: vmlinuz-{VERSION}-{ZINIT_HASH}.efi
  5. Optionally upload versioned kernel to S3 (set UPLOAD_KERNEL=true)
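
Step 1 above follows the usual initramfs packaging pattern; a rough sketch (the real logic lives in scripts/lib/initramfs.sh and scripts/lib/kernel.sh):

# newc cpio format, XZ with CRC32 checks so the kernel can decompress it
(cd initramfs && find . -print0 | cpio --null -o -H newc | xz -9 --check=crc32) > dist/initramfs.cpio.xz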

Testing

QEMU Testing

# Boot test with QEMU (default)
./runit.sh

# With custom parameters
./runit.sh --hypervisor qemu --memory 2048 --disks 3

cloud-hypervisor Testing

# Boot test with cloud-hypervisor
./runit.sh --hypervisor ch

# With disk reset
./runit.sh --hypervisor ch --reset --disks 5

Advanced Options

# See all options
./runit.sh --help

# Custom disk size and bridge
./runit.sh --disk-size 20G --bridge mosbr --disks 4

# Environment variables
HYPERVISOR=ch NUM_DISKS=5 ./runit.sh

Size Optimization

The build system achieves minimal size through:

Package Selection

  • Minimal Alpine packages (~50MB target)
  • No OpenRC or systemd
  • Essential tools only

Binary Optimization

  • strip: Remove debug symbols
  • UPX: Maximum compression
  • musl static linking: No runtime dependencies

Filesystem Cleanup

  • Remove documentation
  • Remove locales (except C)
  • Remove development headers
  • Remove package manager cache
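
The cleanup and binary-optimization passes above can be sketched roughly as follows; paths and flags are illustrative, and the real implementation lives in scripts/lib/initramfs.sh:

# Drop documentation, headers, and the apk cache from the initramfs tree
rm -rf initramfs/usr/share/doc initramfs/usr/share/man initramfs/usr/include initramfs/var/cache/apk
# Strip and UPX-compress executables (errors from non-ELF files are ignored)
find initramfs -type f -perm -u+x -exec strip --strip-all {} + 2>/dev/null || true
find initramfs -type f -perm -u+x -exec upx --best --lzma {} \; 2>/dev/null || true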

Expected Sizes

  • Base Alpine: ~5MB
  • With packages: ~25MB
  • With components: ~40MB
  • After optimization: ~15-20MB
  • Final initramfs.cpio.xz: ~8-12MB
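
To compare a local build against these targets (paths assume the default output locations):

du -sh initramfs/
ls -lh dist/initramfs.cpio.xz dist/vmlinuz.efi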

GitHub Actions Integration

See GITHUB_ACTIONS.md for complete CI/CD setup.

Basic Workflow

name: Build Mycelium OS
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    - name: Setup rootless containers
      run: |
        echo "runner:100000:65536" | sudo tee -a /etc/subuid
        echo "runner:100000:65536" | sudo tee -a /etc/subgid
    - name: Build
      run: ./scripts/build.sh
    - name: Test
      run: ./runit.sh --hypervisor qemu

Advanced Usage

Custom Components

Add custom components to config/sources.conf:

# Custom Git component
git:myapp:https://github.com/user/myapp:v1.0:build_myapp

# Custom release
release:mytool:https://releases.example.com/mytool-x86_64:v2.0:install_mytool

Implement build function in scripts/lib/components.sh:

function build_myapp() {
    local name="$1"
    local component_dir="$2"

    # Build from the checked-out source directory
    cd "$component_dir"

    # Custom build logic (static musl binary)
    export RUSTFLAGS="-C target-feature=+crt-static"
    cargo build --release --target x86_64-unknown-linux-musl

    # Install binary into the initramfs tree (INSTALL_DIR is provided by the build system)
    cp target/x86_64-unknown-linux-musl/release/myapp "${INSTALL_DIR}/usr/bin/"
}

S3 Uploads (Kernel & RFS Flists)

Automatically upload build artifacts to S3-compatible storage:

Configuration

Create config/rfs.conf:

S3_ENDPOINT="https://s3.example.com:9000"
S3_REGION="us-east-1"
S3_BUCKET="mos"
S3_PREFIX="flists/mos_builder"
S3_ACCESS_KEY="YOUR_ACCESS_KEY"
S3_SECRET_KEY="YOUR_SECRET_KEY"

Upload Kernel

# Enable kernel upload
UPLOAD_KERNEL=true ./scripts/build.sh

# Custom kernel subpath (default: kernel)
KERNEL_SUBPATH=kernels UPLOAD_KERNEL=true ./scripts/build.sh

Uploaded files:

  • s3://{bucket}/{prefix}/kernel/vmlinuz-{VERSION}-{ZINIT_HASH}.efi - Versioned kernel
  • s3://{bucket}/{prefix}/kernel/kernels.txt - Text index (one kernel per line)
  • s3://{bucket}/{prefix}/kernel/kernels.json - JSON index with metadata

Index files: The build automatically generates and uploads index files listing all available kernels in the S3 bucket. This enables:

  • Easy kernel selection in web UIs (dropdown menus)
  • Programmatic access without S3 API listing
  • Metadata like upload timestamp and kernel count (JSON format)

JSON index format:

{
  "kernels": [
    "vmlinuz-6.12.44-MyceliumOS-abc1234.efi",
    "vmlinuz-6.12.44-MyceliumOS-def5678.efi"
  ],
  "updated": "2025-01-04T12:00:00Z",
  "count": 2
}
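
As noted above, the index allows kernel discovery without S3 API listing. A minimal example, assuming the endpoint, bucket, and prefix from the sample config/rfs.conf and anonymous read access:

# List available kernels from the JSON index
curl -s https://s3.example.com:9000/mos/flists/mos_builder/kernel/kernels.json | jq -r '.kernels[]'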

Upload RFS Flists

# Enable flist uploads
UPLOAD_MANIFESTS=true ./scripts/build.sh

Uploaded as:

  • s3://{bucket}/{prefix}/manifests/modules-{VERSION}.fl
  • s3://{bucket}/{prefix}/manifests/firmware-{TAG}.fl

Requirements

  • MinIO Client (mcli or mc) must be installed
  • Valid S3 credentials in config/rfs.conf
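
A quick way to verify credentials and inspect uploaded artifacts with the MinIO client (the alias name mos is arbitrary; values come from config/rfs.conf):

mc alias set mos "$S3_ENDPOINT" "$S3_ACCESS_KEY" "$S3_SECRET_KEY"
mc ls mos/mos/flists/mos_builder/kernel/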

Container Builds

Build in isolated container:

# Build container image
podman build -t mos-builder .

# Run build in container
podman run --rm \
  -v $(pwd):/workspace \
  -w /workspace \
  mos-builder \
  ./scripts/build.sh

Cross-Platform Support (totally untested)

The build system supports multiple architectures:

# Build for different targets
export RUST_TARGET="aarch64-unknown-linux-musl"
export ALPINE_ARCH="aarch64"
./scripts/build.sh

Troubleshooting

Common Issues

Build Failures

# Clean and retry
./scripts/clean.sh
./scripts/build.sh

# Check dependencies
./scripts/build.sh --check-deps

Container Issues

# Verify rootless setup
podman system info

# Reset user namespace
podman system reset

Rust Build Issues (inside dev container)

# Enter dev container first
./scripts/dev-container.sh shell

# Verify musl target
rustup target list --installed | grep musl

# Add if missing
rustup target add x86_64-unknown-linux-musl

Note: Rust toolchain issues are rare since the container image includes everything needed.

Debug Mode

# Enable verbose output
export DEBUG=1
./scripts/build.sh

Contributing

  1. Fork the repository
  2. Create feature branch
  3. Test thoroughly with both QEMU and cloud-hypervisor
  4. Ensure size optimization targets are met
  5. Submit pull request with detailed description

Development Workflow

# Setup development environment
./scripts/dev-container.sh start

# Run incremental build
./scripts/build.sh

# Test with QEMU
./runit.sh --hypervisor qemu

# Test with cloud-hypervisor
./runit.sh --hypervisor ch

License

[License information]

Support

  • Issues: GitHub Issues
  • Documentation: See docs/ directory
  • Examples: See examples/ directory