Zero OS Alpine Initramfs Builder

A comprehensive build system for creating custom Alpine Linux 3.22 x86_64 initramfs with zinit process management, designed for Zero OS deployment.

Features

  • Alpine Linux 3.22 miniroot as base system
  • zinit process manager (complete OpenRC replacement)
  • Rootless containers (Docker/Podman compatible)
  • Rust components with musl targeting (zinit, rfs, mycelium)
  • Aggressive optimization (strip + UPX compression)
  • 2-stage module loading for hardware support
  • GitHub Actions compatible build pipeline
  • Final output: vmlinuz.efi with embedded initramfs.cpio.xz

Quick Start

Prerequisites

Ubuntu/Debian

sudo apt-get update
sudo apt-get install -y \
  build-essential \
  rustc \
  cargo \
  upx-ucl \
  binutils \
  git \
  wget \
  qemu-system-x86 \
  podman

# Add Rust musl target
rustup target add x86_64-unknown-linux-musl
sudo apt-get install -y musl-tools

Alpine Linux

apk add --no-cache \
  build-base \
  rust \
  cargo \
  upx \
  git \
  wget \
  qemu-system-x86 \
  podman

# Add the Rust musl target (needed only if you use rustup; Alpine's
# apk-provided rust/cargo toolchain already targets musl natively)
rustup target add x86_64-unknown-linux-musl

Rootless Container Setup

For rootless Docker/Podman support:

# Configure subuid/subgid (if not already configured)
echo "$(whoami):100000:65536" | sudo tee -a /etc/subuid
echo "$(whoami):100000:65536" | sudo tee -a /etc/subgid

# Verify setup
podman system info

Build

# Clone the repository
git clone <repository-url>
cd zosbuilder

# Make scripts executable
chmod +x scripts/build.sh scripts/clean.sh

# Build initramfs
./scripts/build.sh

# Output will be in dist/
ls -la dist/
# vmlinuz.efi          - Kernel with embedded initramfs
# initramfs.cpio.xz    - Standalone initramfs archive

Project Structure

zosbuilder/
├── config/
│   ├── zinit/              # zinit service definitions
│   │   ├── services/       # individual service files
│   │   └── zinit.conf      # main zinit configuration
│   ├── packages.list       # Alpine packages to install
│   ├── sources.conf        # components to build (ThreeFold)
│   ├── kernel.config       # Linux kernel configuration
│   └── modules.conf        # 2-stage module loading
├── configs/                # existing configurations (migrated)
├── scripts/
│   ├── lib/
│   │   ├── docker.sh       # container management
│   │   ├── alpine.sh       # Alpine operations
│   │   ├── components.sh   # source building
│   │   ├── initramfs.sh    # assembly & optimization
│   │   ├── kernel.sh       # kernel building
│   │   └── testing.sh      # QEMU/cloud-hypervisor
│   ├── build.sh            # main orchestrator
│   └── clean.sh            # cleanup script
├── initramfs/              # build output (generated)
├── components/             # component sources (generated)
├── kernel/                 # kernel source (generated)
└── dist/                   # final artifacts (generated)

Configuration

Component Sources (config/sources.conf)

Define components to download and build:

# Format: TYPE:NAME:URL:VERSION:BUILD_FUNCTION[:EXTRA_OPTIONS]

# Git repositories (Rust components with musl)
git:zinit:https://github.com/threefoldtech/zinit:master:build_zinit
git:mycelium:https://github.com/threefoldtech/mycelium:0.6.1:build_mycelium
git:rfs:https://github.com/threefoldtech/rfs:development:build_rfs

# Pre-built releases
release:corex:https://github.com/threefoldtech/corex/releases/download/2.1.4/corex-2.1.4-amd64-linux-static:2.1.4:install_corex:rename=corex

Package List (config/packages.list)

Alpine packages to install (NO OpenRC):

# Core system
alpine-baselayout
busybox
musl

# Hardware detection & modules
eudev
eudev-hwids
kmod

# Networking
iproute2
ethtool
dhcpcd

# Filesystems
btrfs-progs
dosfstools

# Security & SSH
haveged
openssh-server

# Tools
zellij
tcpdump
bmon

Module Loading (config/modules.conf)

2-stage hardware module loading:

# Stage 1: Critical boot modules
stage1:virtio_net
stage1:virtio_scsi
stage1:virtio_blk
stage1:e1000
stage1:e1000e

# Stage 2: Extended hardware support
stage2:igb
stage2:ixgbe
stage2:i40e
stage2:r8169
stage2:bnx2
stage2:bnx2x
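
At boot, each stage is simply iterated and modprobe'd. A minimal loader sketch, assuming the list is shipped into the initramfs as /etc/modules.conf (the real logic lives in the generated zinit service scripts):

# Hypothetical stage-1 loader (illustrative only)
grep '^stage1:' /etc/modules.conf | cut -d: -f2 | while read -r mod; do
  modprobe "$mod" || true   # continue booting even if a module is missing
done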

zinit Configuration (config/zinit/)

Main config (config/zinit/zinit.conf)

log_level: debug
init:
  - stage1-modules
  - stage2-modules
  - networking
  - services

Service definitions (config/zinit/services/)

Service definitions are migrated from the existing configs/zinit/ directory, preserving the required initialization order.
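
A hypothetical service definition, assuming the upstream zinit YAML schema (exec/after keys); actual units ship under config/zinit/services/:

# config/zinit/services/sshd.yaml (illustrative)
exec: /usr/sbin/sshd -D -e
after:
  - networking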

Build Process

Phase 1: Environment Setup

  1. Create build directories
  2. Install build dependencies
  3. Setup Rust musl target

Phase 2: Alpine Base

  1. Download Alpine 3.22 miniroot
  2. Extract to initramfs directory
  3. Install packages from config/packages.list
  4. NO OpenRC installation
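
A minimal sketch of this phase, assuming the Alpine CDN layout and an apk that can populate an alternate root (the real logic lives in scripts/lib/alpine.sh):

# Illustrative only: exact miniroot filename and repository setup are assumptions
wget https://dl-cdn.alpinelinux.org/alpine/v3.22/releases/x86_64/alpine-minirootfs-3.22.0-x86_64.tar.gz
tar -xzf alpine-minirootfs-3.22.0-x86_64.tar.gz -C initramfs/
grep -v '^#' config/packages.list | grep -v '^$' \
  | xargs apk add --root "$PWD/initramfs" --initdb --no-cache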

Phase 3: Component Building

  1. Parse config/sources.conf
  2. Download/clone sources to components/
  3. Build Rust components with musl:
    • zinit: Standard cargo build
    • rfs: Standard cargo build
    • mycelium: Build in myceliumd/ subdirectory
  4. Install binaries to initramfs
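
A sketch of the Rust build step, assuming static musl linking and the layout above (the real build functions are in scripts/lib/components.sh); note that mycelium is built from its myceliumd/ subdirectory:

# Illustrative static musl builds
export RUSTFLAGS="-C target-feature=+crt-static"
( cd components/zinit && cargo build --release --target x86_64-unknown-linux-musl )
( cd components/mycelium/myceliumd && cargo build --release --target x86_64-unknown-linux-musl )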

Phase 4: System Configuration

  1. Replace /sbin/init with zinit
  2. Copy zinit configuration
  3. Setup 2-stage module loading
  4. Configure system services
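
Replacing init amounts to pointing /sbin/init at the zinit binary and shipping its configuration. A hedged sketch, assuming zinit reads /etc/zinit by default:

# Illustrative; exact paths are assumptions
install -m 0755 components/zinit/target/x86_64-unknown-linux-musl/release/zinit initramfs/sbin/zinit
ln -sf zinit initramfs/sbin/init
mkdir -p initramfs/etc/zinit
cp -r config/zinit/. initramfs/etc/zinit/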

Phase 5: Optimization

  1. Aggressive cleanup:
    • Remove docs, man pages, locales
    • Remove headers, development files
    • Remove APK cache
  2. Binary optimization:
    • Strip all executables and libraries
    • UPX compress all binaries
  3. Size verification
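
A sketch of the optimization pass, assuming ELF detection via file(1) (the real pass lives in scripts/lib/initramfs.sh):

# Illustrative: strip and UPX-compress every ELF file, tolerating failures
find initramfs -type f | while read -r f; do
  if file -b "$f" | grep -q ELF; then
    strip --strip-all "$f" 2>/dev/null || true
    upx --best --lzma "$f" 2>/dev/null || true
  fi
done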

Phase 6: Packaging

  1. Create initramfs.cpio.xz with XZ compression
  2. Build kernel with embedded initramfs
  3. Generate vmlinuz.efi
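
The archive is a newc-format cpio stream compressed with xz. A sketch, assuming the kernel embeds it via CONFIG_INITRAMFS_SOURCE (see scripts/lib/kernel.sh):

# Illustrative packaging step; crc32 is the checksum the kernel's xz decompressor expects
( cd initramfs && find . -print0 | cpio --null -o -H newc ) | xz -9 --check=crc32 > dist/initramfs.cpio.xz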

Testing

QEMU Testing

# Boot test with QEMU
./scripts/test.sh --qemu

# With serial console
./scripts/test.sh --qemu --serial

cloud-hypervisor Testing

# Boot test with cloud-hypervisor
./scripts/test.sh --cloud-hypervisor

Custom Testing

# Manual QEMU command
qemu-system-x86_64 \
  -kernel dist/vmlinuz.efi \
  -m 512M \
  -nographic \
  -serial mon:stdio \
  -append "console=ttyS0,115200 console=tty1 loglevel=7"

Size Optimization

The build system achieves minimal size through:

Package Selection

  • Minimal Alpine packages (~50MB target)
  • No OpenRC or systemd
  • Essential tools only

Binary Optimization

  • strip: Remove debug symbols
  • UPX: Maximum compression
  • musl static linking: No runtime dependencies

Filesystem Cleanup

  • Remove documentation
  • Remove locales (except C)
  • Remove development headers
  • Remove package manager cache

Expected Sizes

  • Base Alpine: ~5MB
  • With packages: ~25MB
  • With components: ~40MB
  • After optimization: ~15-20MB
  • Final initramfs.cpio.xz: ~8-12MB

GitHub Actions Integration

See GITHUB_ACTIONS.md for complete CI/CD setup.

Basic Workflow

name: Build Zero OS
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    - name: Setup rootless containers
      run: |
        echo "runner:100000:65536" | sudo tee -a /etc/subuid
        echo "runner:100000:65536" | sudo tee -a /etc/subgid
    - name: Build
      run: ./scripts/build.sh
    - name: Test
      run: ./scripts/test.sh --qemu

Advanced Usage

Custom Components

Add custom components to config/sources.conf:

# Custom Git component
git:myapp:https://github.com/user/myapp:v1.0:build_myapp

# Custom release
release:mytool:https://releases.example.com/mytool-x86_64:v2.0:install_mytool

Implement the build function in scripts/lib/components.sh:

function build_myapp() {
    local name="$1"
    local component_dir="$2"

    # Build statically against musl
    cd "${component_dir}"
    export RUSTFLAGS="-C target-feature=+crt-static"
    cargo build --release --target x86_64-unknown-linux-musl

    # Install the resulting binary into the initramfs tree
    cp "target/x86_64-unknown-linux-musl/release/${name}" "${INSTALL_DIR}/usr/bin/"
}

Container Builds

Build in isolated container:

# Build container image
podman build -t zero-os-builder .

# Run build in container
podman run --rm \
  -v $(pwd):/workspace \
  -w /workspace \
  zero-os-builder \
  ./scripts/build.sh

Cross-Platform Support

The build system supports multiple architectures:

# Build for different targets
export RUST_TARGET="aarch64-unknown-linux-musl"
export ALPINE_ARCH="aarch64"
./scripts/build.sh

Troubleshooting

Common Issues

Build Failures

# Clean and retry
./scripts/clean.sh
./scripts/build.sh

# Check dependencies
./scripts/build.sh --check-deps

Container Issues

# Verify rootless setup
podman system info

# Reset user namespace
podman system reset

Rust Build Issues

# Verify musl target
rustup target list --installed | grep musl

# Add if missing
rustup target add x86_64-unknown-linux-musl

Debug Mode

# Enable verbose output
export DEBUG=1
./scripts/build.sh

Size Analysis

# Analyze initramfs contents
./scripts/analyze-size.sh

# Show largest files
find initramfs/ -type f -exec du -h {} \; | sort -rh | head -20

Contributing

  1. Fork the repository
  2. Create feature branch
  3. Test thoroughly with both QEMU and cloud-hypervisor
  4. Ensure size optimization targets are met
  5. Submit pull request with detailed description

Development Workflow

# Setup development environment
./scripts/setup-dev.sh

# Run tests
./scripts/test.sh --all

# Check size impact
./scripts/analyze-size.sh --compare

License

[License information]

Support

  • Issues: GitHub Issues
  • Documentation: See docs/ directory
  • Examples: See examples/ directory