ubuntu-installer

Fully automated, offline-capable Ubuntu 24.04 LTS installation system. A containerized build system produces a bootable hybrid BIOS+UEFI ISO containing an Alpine Linux live environment with a pre-built Ubuntu rootfs and installation script.

Philosophy: No wizards, no questions beyond essential parameters, no network dependency at install time. Insert USB, boot, run ./install.sh, done.

Prerequisites

  • Podman or Docker (podman preferred)
  • QEMU with KVM for local VM testing (qemu-system-x86_64, qemu-img)
  • OVMF for UEFI testing (e.g. edk2-ovmf on Arch, ovmf on Ubuntu)
  • ~10 GB free disk space for the build
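A quick way to confirm the VM-testing tools are in place before building (a sketch; binary and package names vary by distro):

```shell
# Check for the QEMU binaries and KVM support named in the prerequisites.
check_prereqs() {
  for bin in qemu-system-x86_64 qemu-img; do
    # command -v is the portable way to test whether a binary is on PATH
    command -v "$bin" >/dev/null 2>&1 && echo "found: $bin" || echo "missing: $bin"
  done
  # /dev/kvm exists when the KVM kernel module is loaded and usable
  [ -e /dev/kvm ] && echo "KVM: available" || echo "KVM: not available (VMs will be slow)"
}
check_prereqs
```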

Quick Start

1. Build

```shell
# Auto-detects podman or docker, handles sudo escalation
./build-container.sh

# Or specify runtime explicitly
./build-container.sh --runtime podman
./build-container.sh --runtime docker

# Rebuild from scratch (no layer cache)
./build-container.sh --no-cache
```

The build requires root for chroot bind mounts. When using podman as a non-root user, the script automatically escalates with sudo.

Output goes to output/:

  • ubuntu-installer.iso — hybrid BIOS+UEFI ISO (boot as CD-ROM or dd to USB)
  • SHA256SUMS — checksums
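The SHA256SUMS file lets you verify the ISO before writing it anywhere; the real check is `cd output && sha256sum -c SHA256SUMS`. The sketch below demonstrates the same check with a placeholder file so it is self-contained:

```shell
# Demonstration of the checksum verification step using a stand-in file
# (in practice you run `sha256sum -c SHA256SUMS` inside output/).
workdir=$(mktemp -d)
cd "$workdir"
echo "placeholder ISO contents" > ubuntu-installer.iso
sha256sum ubuntu-installer.iso > SHA256SUMS
sha256sum -c SHA256SUMS    # prints: ubuntu-installer.iso: OK
```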

2. Test in a local VM

The quickest way to test is with QEMU using user-mode networking. SSH is forwarded to localhost:2222.

```shell
# Boot the installer ISO in UEFI mode (creates 20G test disks automatically)
./test/test-qemu.sh --mode uefi --net user
```

The ISO boots as a CD-ROM (sr0). Target disks appear as vda, vdb, etc. At the Alpine prompt, run the installer:

```shell
./install.sh --hostname testbox --net dhcp --disk /dev/vda
```

After installation completes, shut down the VM (poweroff), then boot from the installed disk to verify:

```shell
# Boot the installed system (just the target disk, no installer)
qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
    -drive file=test-disk-1.qcow2,format=qcow2,if=virtio \
    -bios /usr/share/edk2/x64/OVMF_CODE.fd \
    -netdev user,id=net0,hostfwd=tcp::2222-:22 \
    -device virtio-net-pci,netdev=net0 \
    -nographic -serial mon:stdio
```

SSH into the installed system:

```shell
ssh -p 2222 mycelium@localhost
```

BIOS mode

```shell
./test/test-qemu.sh --mode bios --net user
```

Bridge networking

For bridge networking (VM gets a real IP on your network):

```shell
# Create a Linux bridge with an IP for the host side
./test/setup-test-bridge.sh create-linux br0 --ip 192.168.100.1/24

# Boot with the VM attached to the bridge
./test/test-qemu.sh --mode uefi --net bridge,name=br0

# Clean up when done
./test/setup-test-bridge.sh delete-linux br0
```

Cloud Hypervisor (UEFI only)

```shell
./test/test-cloud-hypervisor.sh --net bridge,name=br0
```

Cloud Hypervisor has no CD-ROM device, so it boots the hybrid ISO as a raw disk.

Test script options

Both test-qemu.sh and test-cloud-hypervisor.sh accept:

| Option | Default | Description |
|---|---|---|
| `--image <path>` | `output/ubuntu-installer.iso` | Installer ISO |
| `--disks <n>` | 2 | Number of target disks (QEMU only) |
| `--disk-size <size>` | 20G | Target disk size |
| `--mem <MB>` | 4096 / 2048 | Memory |
| `--cpus <n>` | 2 | CPUs |
| `--net <config>` | user / none | Network (see below) |
| `--serial` | off | Serial console instead of VGA (QEMU only) |

Network configs: user (QEMU only, SSH on port 2222), tap,ifname=<name>, bridge,name=<br>, ovs,name=<br>, none.

Use Ctrl-A X to exit the QEMU console.

3. Write to USB (for real hardware)

```shell
sudo ./builder/write-to-usb.sh /dev/sdX
```

The hybrid ISO is dd-compatible: writing it to a USB stick works the same as writing a raw image. The script includes safety checks: it confirms the target device and refuses to write to system disks.
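The exact checks live in write-to-usb.sh; the sketch below illustrates one plausible form of the "refuse system disks" guard (assumed logic, not the actual implementation):

```shell
# Assumed sketch: refuse to write to the disk that backs the root
# filesystem. Not the real write-to-usb.sh logic.
is_system_disk() {
  # findmnt reports the block device behind /, e.g. /dev/nvme0n1p2
  root_src=$(findmnt -n -o SOURCE /)
  case "$root_src" in
    "$1"*) return 0 ;;   # target is (a partition of) the root disk
    *)     return 1 ;;
  esac
}

if is_system_disk /dev/sdX; then
  echo "refusing to write: /dev/sdX holds the root filesystem"
else
  echo "/dev/sdX looks safe to write"
fi
```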

4. Install (on target machine)

Boot the USB, then:

```shell
# DHCP
./install.sh --hostname myserver --net dhcp

# Static IP
./install.sh --hostname myserver --net static,ip=192.168.1.100/24,gw=192.168.1.1,dns=8.8.8.8

# RAID1 on two disks
./install.sh --hostname myserver --net dhcp --raid raid1,/dev/sda,/dev/sdb
```
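The `--net` argument packs key=value fields behind a mode prefix. As an illustration, such a comma-separated value can be split in plain shell like this (not necessarily how install.sh actually parses it):

```shell
# Illustrative parsing of a --net value; the field layout matches the
# examples above, the parsing code itself is a sketch.
net="static,ip=192.168.1.100/24,gw=192.168.1.1,dns=8.8.8.8"
mode=${net%%,*}          # everything before the first comma -> static
rest=${net#*,}           # the remaining key=value fields

old_ifs=$IFS; IFS=','
for field in $rest; do
  key=${field%%=*}
  val=${field#*=}
  echo "$key -> $val"
done
IFS=$old_ifs
echo "mode: $mode"
```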

Unit Tests

```shell
bats test/*.bats
```

Configuration

| File | Purpose |
|---|---|
| `config/build.conf` | Ubuntu version, architecture, default user password, compression, output filename |
| `config/packages.list` | Packages installed into the rootfs |
| `config/ssh-keys.list` | SSH keys fetched at build time (GitHub and Forgejo) |
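As an illustration of the shape of config/build.conf, here is a sketch with assumed variable names covering the settings listed above (the actual keys in the file may differ):

```shell
# Hypothetical config/build.conf sketch -- variable names are assumptions,
# chosen to match the settings this README says the file controls.
UBUNTU_VERSION="24.04"
ARCH="amd64"
DEFAULT_USER="mycelium"
DEFAULT_PASSWORD="change-me-at-build-time"
SQUASHFS_COMPRESSION="zstd"
OUTPUT_NAME="ubuntu-installer.iso"
```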

Build Dependencies

The container provides all build tools. Key dependencies:

| Package | Purpose |
|---|---|
| `debootstrap` | Bootstrap Ubuntu rootfs |
| `squashfs-tools` | Compress rootfs into squashfs |
| `grub-pc-bin` | GRUB BIOS boot images (`cdboot.img`, `boot_hybrid.img`) |
| `grub-efi-amd64-bin` | GRUB UEFI modules for `grub-mkstandalone` |
| `xorriso` | Create hybrid ISO with dual El Torito entries |
| `mtools` | Build FAT EFI System Partition image without mounting |
| `cpio` | Repack initramfs with embedded squashfs |
| `dosfstools` | Format FAT filesystem for EFI image |

Directory Structure

```
├── build-container.sh            # Container build wrapper (podman/docker)
├── Containerfile                 # Container image for builder
├── builder/
│   ├── build.sh                  # Main build script
│   ├── write-to-usb.sh           # Write ISO to USB
│   └── lib/
│       ├── common.sh             # Shared utilities
│       ├── rootfs.sh             # Rootfs creation functions
│       ├── alpine.sh             # Alpine image + ISO creation
│       └── ssh-keys.sh           # SSH key fetching
├── config/
│   ├── build.conf                # Build configuration
│   ├── packages.list             # Packages to install
│   ├── ssh-keys.list             # SSH key sources
│   ├── my_init.service           # systemd unit for my_init supervisor
│   └── my_init/                  # my_init process supervisor services
│       ├── install-mycelium.toml # Download mycelium binary (oneshot)
│       ├── mycelium.toml         # Mycelium IPv6 overlay daemon
│       ├── call-home.toml        # Node registration with beacon
│       └── call-home.sh          # Beacon registration helper script
├── installer/
│   ├── install.sh                # Installation script (embedded in image)
│   └── lib/
│       ├── common.sh             # Shared utilities
│       ├── disk.sh               # Partitioning functions
│       ├── network.sh            # Network configuration
│       └── bootloader.sh         # GRUB installation
├── output/                       # Build output (gitignored)
└── test/
    ├── *.bats                    # Unit tests (bats)
    ├── test-qemu.sh              # QEMU test runner
    ├── test-cloud-hypervisor.sh  # Cloud Hypervisor test runner
    └── setup-test-bridge.sh      # Bridge setup helper
```

Installed System

  • User: mycelium (password set at build time via config/build.conf)
  • SSH: Key-only authentication, root login disabled
  • Sudo: Passwordless for mycelium
  • Network: systemd-networkd (DHCP or static, no netplan)
  • Filesystem: btrfs with zstd compression, subvolumes: @, @home, @snapshots, @var_log, @var_tmp
  • Process Supervisor: my_init manages services after boot

Process Supervisor (my_init)

The installed system uses my_init as a lightweight process supervisor (started via systemd). On first boot the following service chain runs:

  1. install-mycelium (oneshot) — downloads the mycelium binary if not already present
  2. mycelium (daemon) — starts the IPv6 overlay network, connects to bootstrap peers
  3. call-home (oneshot) — registers the node with the beacon server (hostname, pubkey, subnet)

After call-home completes, the node is discoverable via the beacon and reachable over the mycelium overlay network.
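To make the chain concrete, here is a hedged sketch of what a helper like call-home.sh might send. The beacon URL, endpoint path, and JSON field names are assumptions; only the registered fields (hostname, pubkey, subnet) come from this README.

```shell
#!/bin/sh
# Hypothetical call-home sketch -- not the actual call-home.sh.
# BEACON_URL and the JSON shape are placeholder assumptions.
BEACON_URL="${BEACON_URL:-http://beacon.example:8080/register}"
NODE_HOSTNAME="${NODE_HOSTNAME:-$(uname -n)}"
NODE_PUBKEY="${NODE_PUBKEY:-unknown}"
NODE_SUBNET="${NODE_SUBNET:-unknown}"

# Assemble the registration payload from the three fields the beacon
# is described as recording.
payload=$(printf '{"hostname":"%s","pubkey":"%s","subnet":"%s"}' \
  "$NODE_HOSTNAME" "$NODE_PUBKEY" "$NODE_SUBNET")
echo "registering: $payload"

# A real helper would POST the payload, e.g.:
# curl -fsS -X POST -H 'Content-Type: application/json' \
#      -d "$payload" "$BEACON_URL"
```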