zosbuilder/scripts/lib/docker.sh
Jan De Landtsheer ad0a06e267
initramfs+modules: robust copy aliasing, curated stage1 + PHYs, firmware policy via firmware.conf, runtime readiness, build ID; docs sync
Summary of changes (with references):

Modules + PHY coverage
- Curated and normalized stage1 list in [config.modules.conf](config/modules.conf:1):
  - Boot-critical storage, core virtio, common NICs (Intel/Realtek/Broadcom), overlay/fuse, USB HCD/HID.
  - Added PHY drivers required by NIC MACs:
    * realtek (for r8169, etc.)
    * broadcom families: broadcom, bcm7xxx, bcm87xx, bcm_phy_lib, bcm_phy_ptp
- Robust underscore↔hyphen aliasing during copy so e.g. xhci_pci → xhci-pci.ko, hid_generic → hid-generic.ko (a sketch follows this summary):
  - [bash.initramfs_copy_resolved_modules()](scripts/lib/initramfs.sh:990)

Firmware policy and coverage
- Firmware selection is now authoritative via [config/firmware.conf](config/firmware.conf:1); firmware hints in modules.conf are ignored:
  - [bash.initramfs_setup_modules()](scripts/lib/initramfs.sh:229)
  - Count from firmware.conf for reporting; remove stale required-firmware.list.
- Expanded NIC firmware set (bnx2, bnx2x, tigon, intel, realtek, rtl_nic, qlogic, e100) in [config.firmware.conf](config/firmware.conf:1).
- Installer enforces firmware.conf as the source of truth in [bash.alpine_install_firmware()](scripts/lib/alpine.sh:392).

Early input & build freshness
- Write a runtime build stamp to /etc/zero-os-build-id for embedded initramfs verification:
  - [bash.initramfs_finalize_customization()](scripts/lib/initramfs.sh:568)
- Minor init refinements in [config.init](config/init:1) (ensures /home, consistent depmod path).

Rebuild helper improvements
- [scripts/rebuild-after-zinit.sh](scripts/rebuild-after-zinit.sh:1):
  - Added --verify-only; container-aware execution; selective marker clears only.
  - Prints stage status before/after; avoids --rebuild-from; resolves full kernel version for diagnostics.

Remote flist readiness + zinit
- Init scripts now probe BASE_URL readiness and accept FLISTS_BASE_URL/FLIST_BASE_URL; firmware target is /lib/firmware:
  - [sh.firmware.sh](config/zinit/init/firmware.sh:1)
  - [sh.modules.sh](config/zinit/init/modules.sh:1)

Container, docs, and utilities
- Stream container build logs by calling the runtime build directly in [bash.docker_build_container()](scripts/lib/docker.sh:56).
- Docs updated to reflect firmware policy, runtime readiness, rebuild helper, early input, and GRUB USB:
  - [docs.NOTES.md](docs/NOTES.md)
  - [docs.PROMPT.md](docs/PROMPT.md)
  - [docs.review-rfs-integration.md](docs/review-rfs-integration.md)
- Added GRUB USB creator (referenced in docs): [scripts/make-grub-usb.sh](scripts/make-grub-usb.sh)

Cleanup
- Removed legacy/duplicated config trees under configs/ and config/zinit.old/.
- Minor newline and ignore fixes: [.gitignore](.gitignore:1)

Net effect
- Runtime now has correct USB HCDs/HID-generic and NIC+PHY coverage (Realtek/Broadcom), with matching firmware installed in the initramfs.
- Rebuild workflow is minimal and host/container-aware; docs are aligned with implemented behavior.
2025-09-23 14:03:01 +02:00
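
For illustration, a minimal, hedged sketch of the underscore↔hyphen module-name aliasing described in the summary above. The function and path names below are assumptions made for the example; the actual implementation is [bash.initramfs_copy_resolved_modules()](scripts/lib/initramfs.sh:990).

```bash
# Hypothetical sketch: resolve a module name against the kernel module tree,
# trying both underscore and hyphen spellings (e.g. xhci_pci -> xhci-pci.ko).
resolve_module_path() {
    local name="$1" moddir="$2"
    local candidates=("$name" "${name//_/-}" "${name//-/_}")
    local cand path
    for cand in "${candidates[@]}"; do
        path=$(find "$moddir" -name "${cand}.ko*" -print -quit 2>/dev/null)
        if [[ -n "$path" ]]; then
            echo "$path"
            return 0
        fi
    done
    return 1
}

# Example invocation (paths are assumptions):
# resolve_module_path "xhci_pci" "/lib/modules/$(uname -r)"
```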


#!/bin/bash
# Container management for rootless Docker/Podman builds

# Source common functions
LIB_SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${LIB_SCRIPT_DIR}/common.sh"

# Container configuration
CONTAINER_RUNTIME=""
BUILDER_IMAGE="zero-os-builder:latest"
ALPINE_VERSION="${ALPINE_VERSION:-3.22}"

# Detect available container runtime
function docker_detect_runtime() {
    section_header "Detecting Container Runtime"

    if command_exists "podman"; then
        CONTAINER_RUNTIME="podman"
        log_info "Using Podman as container runtime"
    elif command_exists "docker"; then
        CONTAINER_RUNTIME="docker"
        log_info "Using Docker as container runtime"
    else
        log_error "No container runtime found (podman or docker required)"
        return 1
    fi

    # Check if rootless setup is working
    docker_verify_rootless
}

# Verify rootless container setup
function docker_verify_rootless() {
    section_header "Verifying Rootless Container Setup"

    log_info "Checking ${CONTAINER_RUNTIME} rootless configuration"
    safe_execute ${CONTAINER_RUNTIME} system info

    # Test basic rootless functionality
    log_info "Testing rootless container execution"
    safe_execute ${CONTAINER_RUNTIME} run --rm alpine:${ALPINE_VERSION} echo "Rootless container test successful"

    log_info "Rootless container setup verified"
}

# Build container image with build tools
function docker_build_container() {
    local dockerfile_path="${1:-${PROJECT_ROOT}/Dockerfile}"
    local tag="${2:-${BUILDER_IMAGE}}"

    section_header "Building Container Image"

    # Create Dockerfile if it doesn't exist
    if [[ ! -f "$dockerfile_path" ]]; then
        docker_create_dockerfile "$dockerfile_path"
    fi

    log_info "Building container image: ${tag}"
    # Call the runtime directly (not via safe_execute) so build logs stream to the console
    ${CONTAINER_RUNTIME} build -t "${tag}" -f "${dockerfile_path}" "${PROJECT_ROOT}"

    log_info "Container image built successfully: ${tag}"
}

# Create optimized Dockerfile for build environment
function docker_create_dockerfile() {
    local dockerfile_path="$1"

    section_header "Creating Dockerfile"

    cat > "$dockerfile_path" << 'EOF'
FROM alpine:3.22

# Install build dependencies
RUN apk add --no-cache \
    build-base \
    rust \
    cargo \
    upx \
    git \
    wget \
    tar \
    gzip \
    xz \
    cpio \
    binutils \
    linux-headers \
    musl-dev \
    pkgconfig \
    openssl-dev

# Add the Rust musl target only when rustup is available (Alpine's apk rust/cargo
# packages are musl-native and do not ship rustup, so skip it otherwise)
RUN command -v rustup >/dev/null 2>&1 && rustup target add x86_64-unknown-linux-musl || true

# Create non-root user for builds
RUN adduser -D -s /bin/sh builder && \
    chown -R builder:builder /home/builder

# Set working directory
WORKDIR /workspace

# Switch to non-root user
USER builder

# Set environment variables for static linking
ENV RUSTFLAGS="-C target-feature=+crt-static"
ENV CC_x86_64_unknown_linux_musl="musl-gcc"
ENV CARGO_TARGET_X86_64_UNKNOWN_LINUX_MUSL_LINKER="musl-gcc"

CMD ["/bin/sh"]
EOF

    log_info "Created Dockerfile: ${dockerfile_path}"
}

# Start rootless container for building
function docker_start_rootless() {
    local image="${1:-${BUILDER_IMAGE}}"
    local workdir="${2:-/workspace}"
    local command="${3:-/bin/sh}"

    section_header "Starting Rootless Container"

    # Setup volume mounts
    local user_args="--user $(id -u):$(id -g)"
    local volume_args="-v ${PROJECT_ROOT}:${workdir}"
    local env_args=""

    # Pass through environment variables
    local env_vars=(
        "DEBUG"
        "ALPINE_VERSION"
        "KERNEL_VERSION"
        "RUST_TARGET"
        "OPTIMIZATION_LEVEL"
    )

    for var in "${env_vars[@]}"; do
        if [[ -n "${!var:-}" ]]; then
            env_args="${env_args} -e ${var}=${!var}"
        fi
    done

    log_info "Starting container with rootless privileges"
    safe_execute ${CONTAINER_RUNTIME} run --rm -it \
        ${user_args} \
        ${volume_args} \
        ${env_args} \
        -w "${workdir}" \
        "${image}" \
        ${command}
}

# Run build command in container
function docker_run_build() {
    local build_command="${1:-./scripts/build.sh}"
    local image="${2:-${BUILDER_IMAGE}}"

    section_header "Running Build in Container"

    # Extract script path from command (first part before any arguments)
    local script_path=$(echo "$build_command" | cut -d' ' -f1)

    # Ensure build script is executable
    safe_execute chmod +x "${PROJECT_ROOT}/${script_path}"

    log_info "Executing build command in container: ${build_command}"

    # Pass through environment variables for proper logging
    local env_args=""
    local env_vars=(
        "DEBUG"
        "ALPINE_VERSION"
        "KERNEL_VERSION"
        "RUST_TARGET"
        "OPTIMIZATION_LEVEL"
    )

    for var in "${env_vars[@]}"; do
        if [[ -n "${!var:-}" ]]; then
            env_args="${env_args} -e ${var}=${!var}"
        fi
    done

    # Run with privileged access for chroot mounts and system operations
    log_info "Container environment: ${env_args}"
    ${CONTAINER_RUNTIME} run --rm -it \
        --privileged \
        ${env_args} \
        -v "${PROJECT_ROOT}:/workspace" \
        -w /workspace \
        "${image}" \
        ${build_command}
}

# Commit container state for reuse
function docker_commit_builder() {
    local container_id="$1"
    local new_tag="${2:-${BUILDER_IMAGE}-cached}"

    section_header "Committing Builder Container"

    log_info "Committing container ${container_id} as ${new_tag}"
    safe_execute ${CONTAINER_RUNTIME} commit "${container_id}" "${new_tag}"

    log_info "Container committed successfully: ${new_tag}"
}

# Clean up container images and running containers
function docker_cleanup() {
    local keep_builder="${1:-false}"

    section_header "Cleaning Up Containers and Images"

    if [[ "$keep_builder" != "true" ]]; then
        log_info "Cleaning up builder containers and images"

        # Stop and remove any containers using the builder image
        local containers_using_image=$(${CONTAINER_RUNTIME} ps -a --filter "ancestor=${BUILDER_IMAGE}" --format "{{.ID}}" 2>/dev/null || true)
        if [[ -n "$containers_using_image" ]]; then
            log_info "Stopping containers using builder image"
            for container_id in $containers_using_image; do
                log_info "Stopping container: $container_id"
                ${CONTAINER_RUNTIME} stop "$container_id" 2>/dev/null || true
                ${CONTAINER_RUNTIME} rm "$container_id" 2>/dev/null || true
            done
        fi

        # Stop and remove development container if it exists
        # ("container exists" is Podman-only; fall back to a name filter for Docker)
        local dev_container="zero-os-dev"
        if ${CONTAINER_RUNTIME} container exists "$dev_container" 2>/dev/null || \
           [[ -n "$(${CONTAINER_RUNTIME} ps -a --filter "name=^${dev_container}$" --format '{{.ID}}' 2>/dev/null)" ]]; then
            log_info "Removing development container: $dev_container"
            ${CONTAINER_RUNTIME} rm -f "$dev_container" 2>/dev/null || true
        fi

        # Now remove the images
        log_info "Removing builder images"
        ${CONTAINER_RUNTIME} rmi "${BUILDER_IMAGE}" 2>/dev/null || log_warn "Could not remove ${BUILDER_IMAGE} (may not exist)"
        ${CONTAINER_RUNTIME} rmi "${BUILDER_IMAGE}-cached" 2>/dev/null || log_warn "Could not remove ${BUILDER_IMAGE}-cached (may not exist)"
    fi

    log_info "Pruning unused containers and images"
    ${CONTAINER_RUNTIME} system prune -f 2>/dev/null || log_warn "Container prune failed"

    log_info "Container cleanup complete"
}

# Check container runtime capabilities
function docker_check_capabilities() {
    section_header "Checking Container Capabilities"

    # Check user namespace support
    if [[ -f /proc/sys/user/max_user_namespaces ]]; then
        local max_namespaces=$(cat /proc/sys/user/max_user_namespaces)
        log_info "User namespaces available: ${max_namespaces}"

        if [[ "$max_namespaces" -eq 0 ]]; then
            log_warn "User namespaces are disabled, rootless containers may not work"
        fi
    fi

    # Check subuid/subgid configuration
    local current_user=$(whoami)
    if [[ -f /etc/subuid ]] && grep -q "^${current_user}:" /etc/subuid; then
        log_info "subuid configured for user: ${current_user}"
    else
        log_warn "subuid not configured for user: ${current_user}"
        log_warn "Run: echo '${current_user}:100000:65536' | sudo tee -a /etc/subuid"
    fi

    if [[ -f /etc/subgid ]] && grep -q "^${current_user}:" /etc/subgid; then
        log_info "subgid configured for user: ${current_user}"
    else
        log_warn "subgid not configured for user: ${current_user}"
        log_warn "Run: echo '${current_user}:100000:65536' | sudo tee -a /etc/subgid"
    fi
}

# Setup rootless environment
function docker_setup_rootless() {
    section_header "Setting Up Rootless Environment"

    local current_user=$(whoami)

    # Check if running as root
    if [[ "$EUID" -eq 0 ]]; then
        log_error "Do not run as root. Rootless containers require non-root user."
        return 1
    fi

    # Check and setup subuid/subgid if needed
    if ! grep -q "^${current_user}:" /etc/subuid 2>/dev/null; then
        log_info "Setting up subuid for ${current_user}"
        echo "${current_user}:100000:65536" | sudo tee -a /etc/subuid
    fi

    if ! grep -q "^${current_user}:" /etc/subgid 2>/dev/null; then
        log_info "Setting up subgid for ${current_user}"
        echo "${current_user}:100000:65536" | sudo tee -a /etc/subgid
    fi

    # Initialize container runtime if needed
    if [[ "$CONTAINER_RUNTIME" == "podman" ]]; then
        log_info "Initializing Podman for rootless use"
        safe_execute podman system migrate || true
    fi

    log_info "Rootless environment setup complete"
}

# Export functions
export -f docker_detect_runtime docker_verify_rootless
export -f docker_build_container docker_create_dockerfile
export -f docker_start_rootless docker_run_build
export -f docker_commit_builder docker_cleanup
export -f docker_check_capabilities docker_setup_rootless
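
For orientation, a hedged sketch of how these exported functions might be composed from a build entry point. The calling script and its arguments below are assumptions, not the project's actual wiring; the real call sites live in the project's build scripts.

```bash
#!/bin/bash
# Hypothetical usage sketch; assumes common.sh defines PROJECT_ROOT, the logging
# helpers, and safe_execute, and that this runs from the repository root.
source "scripts/lib/docker.sh"

docker_detect_runtime            # pick podman or docker, verify rootless operation
docker_check_capabilities        # warn about missing user namespaces / subuid / subgid
docker_build_container           # build zero-os-builder:latest (generates a Dockerfile if absent)
docker_run_build "./scripts/build.sh"   # run the build inside the container
```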