# my_hypervisor

A Docker-like CLI for managing lightweight microVMs powered by Cloud Hypervisor.

my_hypervisor lets you pull OCI container images and run them as hardware-isolated virtual machines with a familiar Docker-style workflow. Each VM boots a real Linux kernel with its own network stack, achieving stronger isolation than containers while maintaining a simple user experience.
## Features

- **Docker-like CLI** -- `my_hypervisor run`, `my_hypervisor exec`, `my_hypervisor attach`, `my_hypervisor stop`, `my_hypervisor rm`
- **OCI image support** -- Pull images from Docker Hub and other OCI-compatible registries
- **Registry authentication** -- `my_hypervisor login`/`my_hypervisor logout` with credentials stored in `~/.docker/config.json`
- **Fast boot** -- MicroVMs start in 2-5 seconds
- **TAP networking** -- Bridge-based networking with NAT, port forwarding, and DNS
- **Mycelium networking** -- Peer-to-peer IPv6 overlay network with automatic binary injection and IP tracking
- **Volume mounts** -- Share host directories with VMs via virtiofs
- **Exec & attach** -- Execute commands or get an interactive shell via vsock
- **Kernel management** -- Extract, select, and check host kernels automatically
- **Resource cleanup** -- Ownership-aware garbage collection of TAP devices and iptables rules, with dry-run mode and heuristic fallback
- **Resource control** -- Configure vCPUs, memory, and environment variables per VM
- **Short ID resolution** -- Reference VMs by name, full ID, or unique ID prefix (e.g., `my_hypervisor stop a1b`)
- **Self-update** -- `my_hypervisor update` to check for and install the latest release
- **Tab completion** -- Dynamic shell completions for commands, flags, and VM names/IDs
## Requirements

- Linux with KVM support (`/dev/kvm`)
- Cloud Hypervisor v50+
- virtiofsd (required for volume mounts and explicit `--storage virtiofs`; rootfs storage falls back to `block` when unavailable)
- Root privileges (or `doas`/`sudo`) for VM operations
- `mkfs.ext4`, `iptables`, `ip` (iproute2)
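A quick preflight sketch for the requirements above. This is illustrative only; `my_hypervisor doctor` (shown below) is the authoritative check:

```bash
# Report whether a command is on PATH.
check() {
    command -v "$1" >/dev/null 2>&1 && echo "$1: ok" || echo "$1: missing"
}

# KVM device node must exist for hardware virtualization.
[ -e /dev/kvm ] && echo "/dev/kvm: ok" || echo "/dev/kvm: missing"

# Required host tools.
for tool in cloud-hypervisor virtiofsd mkfs.ext4 iptables ip; do
    check "$tool"
done
```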
## Installation

### From Source

```bash
git clone https://github.com/geomind/my_hypervisor.git
cd my_hypervisor

# Install musl target (one-time)
rustup target add x86_64-unknown-linux-musl

# Build everything (host CLI + statically-linked guest init)
make build

# Install binaries and shell completions
make install
```
### System Setup

```bash
# Check what's needed
my_hypervisor doctor

# Clean up leaked resources from crashed VMs (ownership-aware)
sudo my_hypervisor doctor --gc

# Preview what would be cleaned up
sudo my_hypervisor doctor --gc --dry-run

# Set up the kernel and network bridge
sudo my_hypervisor setup
```
## Quick Start

```bash
# Check system readiness
my_hypervisor doctor

# Extract a kernel from the running host
sudo my_hypervisor kernel extract-host

# Pull an image and run a VM
sudo my_hypervisor run --rm alpine:latest echo "Hello from a microVM!"

# Run a detached VM
sudo my_hypervisor run -d --name myvm alpine:latest sleep 3600

# Execute a command inside the VM
sudo my_hypervisor exec myvm cat /etc/os-release

# Get an interactive shell (detach with Ctrl+P, Ctrl+Q)
sudo my_hypervisor attach myvm

# Stop and remove
sudo my_hypervisor stop myvm
sudo my_hypervisor rm myvm
```
## Usage

### Images

```bash
# Pull from Docker Hub
sudo my_hypervisor pull alpine:latest
sudo my_hypervisor pull ubuntu:22.04

# Pull from other registries
sudo my_hypervisor pull ghcr.io/user/repo:v1

# List cached images
my_hypervisor images

# Remove a cached image by ref or digest
my_hypervisor images rm ubuntu:22.04
my_hypervisor rmi sha256:012345...

# Log in to a registry (credentials stored in ~/.docker/config.json)
my_hypervisor login docker.io
my_hypervisor login ghcr.io

# Log out
my_hypervisor logout docker.io
```
> **Note:** Credentials are stored as base64-encoded tokens in `~/.docker/config.json`.
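Base64 is an encoding, not encryption, which is why that file must be kept private. A minimal sketch of what the stored credential looks like, using a sample Docker-style config file with made-up credentials:

```bash
# Illustrative sample file -- Docker-style configs keep a base64-encoded
# "user:password" string under auths.<registry>.auth. Not real credentials.
cat > /tmp/sample-docker-config.json <<'EOF'
{"auths": {"docker.io": {"auth": "YWxpY2U6aHVudGVyMg=="}}}
EOF

# Extract the token and decode it (jq works too; sed keeps this dependency-free).
auth=$(sed -n 's/.*"auth": *"\([^"]*\)".*/\1/p' /tmp/sample-docker-config.json)
decoded=$(printf '%s' "$auth" | base64 -d)
echo "$decoded"   # alice:hunter2
```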
### VM Lifecycle

```bash
# Run foreground (blocks until VM exits)
sudo my_hypervisor run --rm alpine:latest echo hello

# Run detached
sudo my_hypervisor run -d --name web alpine:latest sleep 3600

# Create without starting
sudo my_hypervisor create --name worker alpine:latest sleep 3600
sudo my_hypervisor start worker

# Stop, remove
sudo my_hypervisor stop web
sudo my_hypervisor rm web

# Force remove a running VM
sudo my_hypervisor rm -f worker

# Remove all stopped VMs
sudo my_hypervisor prune
```
### Inspecting VMs

```bash
# List running VMs
sudo my_hypervisor list

# List all VMs (including stopped)
sudo my_hypervisor list --all

# Show detailed VM info as JSON
sudo my_hypervisor inspect myvm

# Show console logs
sudo my_hypervisor logs myvm

# Show live resource stats (CPU, memory, network, block I/O)
sudo my_hypervisor stats myvm

# JSON output for scripting
sudo my_hypervisor stats myvm --json
```
The `stats` command shows CPU allocation, memory, network info (mode, IP address, gateway, mycelium IP, RX/TX traffic), and block I/O counters for a running VM.
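The `--json` flag makes the output easy to script against. The exact schema is not documented here, so the field names below are hypothetical; a sketch of alerting on high memory use, assuming a payload shaped like the sample:

```bash
# Hypothetical stats payload -- field names are illustrative, not the
# documented schema. In practice you would capture:
#   stats=$(sudo my_hypervisor stats myvm --json)
stats='{"name": "myvm", "memory_mb": 512, "memory_used_mb": 430}'

# Pull the two numbers out (jq works too; sed keeps this dependency-free).
used=$(printf '%s' "$stats" | sed -n 's/.*"memory_used_mb": *\([0-9]*\).*/\1/p')
total=$(printf '%s' "$stats" | sed -n 's/.*"memory_mb": *\([0-9]*\).*/\1/p')

# Warn when memory use exceeds 80% of the allocation.
if [ $((used * 100 / total)) -gt 80 ]; then
    echo "myvm: high memory use ($used/$total MB)"
fi
```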
### Executing Commands

```bash
# Run a command
sudo my_hypervisor exec myvm cat /etc/os-release

# Commands with flags work naturally
sudo my_hypervisor exec myvm ping -c 4 8.8.8.8

# With environment variables (flags before the container name)
sudo my_hypervisor exec -e FOO=bar myvm env

# With working directory
sudo my_hypervisor exec -w /tmp myvm pwd

# Interactive shell (detach with Ctrl+P, Ctrl+Q)
sudo my_hypervisor attach myvm
```
### Networking

```bash
# Default: TAP networking with bridge (192.168.200.0/24)
sudo my_hypervisor run -d --name net-demo alpine:latest sleep 3600

# Port forwarding (tcp by default, or specify /udp)
sudo my_hypervisor run -d --name web -p 8080:80 -p 5353:53/udp alpine:latest sleep 3600

# Mycelium peer-to-peer networking
sudo my_hypervisor run -d --name myc1 --network mycelium \
  --mycelium-peers tcp://188.40.132.242:9651 \
  --mycelium-peers tcp://136.243.47.186:9651 \
  alpine:latest sleep 3600

# Inspect to see the mycelium IPv6 address
sudo my_hypervisor inspect myc1 | jq '.network_state.mycelium_ip'

# Check mycelium health status
sudo my_hypervisor inspect myc1 --check-mycelium-health | jq '.network_state.mycelium_health'

# No networking
sudo my_hypervisor run -d --network none alpine:latest sleep 3600
```
### Resources

```bash
# Custom CPU and memory
sudo my_hypervisor run --rm --cpus 2 --memory 1024 alpine:latest free -m

# Environment variables
sudo my_hypervisor run --rm -e DB_HOST=localhost -e DB_PORT=5432 alpine:latest env
```
### Storage

```bash
# Mount a host directory into the VM
sudo my_hypervisor run --rm -v /path/on/host:/mnt/data alpine:latest ls /mnt/data

# Read-only mount
sudo my_hypervisor run --rm -v /path/on/host:/mnt/data:ro alpine:latest cat /mnt/data/file.txt

# Use virtiofs storage backend explicitly (requires virtiofsd + kernel with virtiofs support)
sudo my_hypervisor run --rm --storage virtiofs alpine:latest echo hello

# Request a larger block-backed rootfs
sudo my_hypervisor run --rm --storage block --disk-size 2G ubuntu:24.04 apt-get update

# Cap the writable layer when using virtiofs
sudo my_hypervisor run --rm --storage virtiofs --storage-quota 4G ubuntu:24.04 bash

# Use a local rootfs directory instead of an OCI image
sudo my_hypervisor run --rm --rootfs /path/to/rootfs alpine:latest /bin/sh -c "echo custom rootfs"
```
Storage backends:

| Backend | Flag | Description |
|---|---|---|
| `block` | `--storage block` | Creates an ext4 disk image. Works with any kernel. No extra daemons. Used automatically when the preferred virtiofs path is unavailable. |
| `virtiofs` | `--storage virtiofs` | Uses overlay + virtiofsd. Preferred by default when available. Requires virtiofsd and a kernel with `CONFIG_FUSE_FS` + `CONFIG_VIRTIO_FS`. The custom kernel (`vmlinux-my_hypervisor-6.6.70`) has these built in. |
By default, my_hypervisor prefers virtiofs for the root filesystem. If virtiofsd is not installed or the selected kernel cannot support virtiofs, `my_hypervisor run` and `my_hypervisor create` automatically fall back to `block` instead of failing.

`--disk-size` overrides the ext4 image size used by the block backend. `--storage-quota` caps the writable upper layer used by the virtiofs backend.
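To make the block backend concrete: an ext4 disk image of a given size is, at its simplest, a sparse file formatted with `mkfs.ext4`. A rough hand-rolled sketch (assumed detail; my_hypervisor's actual image layout may differ):

```bash
# Sparse 2 GiB file, analogous to what --disk-size 2G requests.
img=/tmp/sketch-rootfs.img
truncate -s 2G "$img"

# Formatting a regular file does not require root. Guarded so the sketch
# still runs on hosts without e2fsprogs installed.
if command -v mkfs.ext4 >/dev/null 2>&1; then
    mkfs.ext4 -q "$img"
fi

ls -lh "$img"
```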
See [Kernel Management](#kernel-management) for building a custom kernel.
## Kernel Management

```bash
# List available kernels
my_hypervisor kernel list

# Extract kernel from running host
sudo my_hypervisor kernel extract-host

# Build a custom minimal kernel (recommended for best performance)
bash scripts/build-kernel.sh
my_hypervisor kernel set-default vmlinux-my_hypervisor-6.6.70

# Check kernel capabilities
my_hypervisor kernel check
```
## Configuration

my_hypervisor stores its data in `~/.my_hypervisor/` (configurable via `MY_HYPERVISOR_HOME`):

```
~/.my_hypervisor/
  config.json   # Global configuration
  bin/          # Helper binaries (my_hypervisor-init, mycelium)
  kernels/      # Kernel binaries
  images/       # OCI image cache
  vms/          # Per-VM state and data
```
### config.json

```json
{
  "base_dir": "/home/user/.my_hypervisor",
  "default_vcpu": 1,
  "default_memory_mb": 512,
  "default_network_mode": "mycelium",
  "default_storage_backend": "virtiofs",
  "default_block_disk_size_mb": null,
  "default_virtiofs_quota_mb": null,
  "default_bridge": "my_hypervisor0",
  "kernel_path": null,
  "virtiofsd_bin": null,
  "cloud_hypervisor_bin": null
}
```
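Since `MY_HYPERVISOR_HOME` relocates the whole state directory, a disposable directory is handy for experiments. A sketch (whether the variable must be passed through `sudo` with `--preserve-env` is an assumption about the tool's behavior):

```bash
# Keep a separate, disposable state directory for experiments.
# MY_HYPERVISOR_HOME redirects everything that normally lives in
# ~/.my_hypervisor (config, kernels, image cache, VM state).
export MY_HYPERVISOR_HOME=/tmp/my_hypervisor-scratch
mkdir -p "$MY_HYPERVISOR_HOME"

# Subsequent commands would then use the scratch directory, e.g.:
#   sudo --preserve-env=MY_HYPERVISOR_HOME my_hypervisor run --rm alpine:latest echo hi
```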
## Examples

### Web Server Over Mycelium

Run nginx in a VM and access it from anywhere on the mycelium network -- no port forwarding needed:

```bash
# Start nginx with mycelium networking
sudo my_hypervisor run -d --name web --network mycelium \
  --mycelium-peers tcp://188.40.132.242:9651 \
  --mycelium-peers tcp://136.243.47.186:9651 \
  alpine:latest -- sh -c "\
    apk add --no-cache nginx && mkdir -p /var/www /run/nginx && \
    echo 'hello from my_hypervisor' > /var/www/index.html && \
    printf 'server { listen 80; listen [::]:80; root /var/www; }\n' \
      > /etc/nginx/http.d/default.conf && \
    nginx -g 'daemon off;'"

# Get the mycelium IPv6 address
WEB_IP=$(sudo my_hypervisor inspect web | jq -r '.network_state.mycelium_ip')

# Access from another VM on the mycelium network
sudo my_hypervisor run --rm --network mycelium \
  --mycelium-peers tcp://188.40.132.242:9651 \
  alpine:latest -- sh -c "apk add --no-cache curl && curl -s http://[$WEB_IP]/"
# Output: hello from my_hypervisor
```
### Isolated Build Environment

Mount source code into a VM for sandboxed compilation:

```bash
sudo my_hypervisor run --rm --cpus 2 --memory 1024 \
  -v /home/user/project:/src:ro \
  -v /tmp/output:/out \
  alpine:latest -- sh -c "\
    apk add --no-cache gcc musl-dev make && \
    cd /src && make && cp build/app /out/"
```
### Disposable Shell

Run untrusted code in a VM that auto-cleans on exit:

```bash
sudo my_hypervisor run --rm alpine:latest -- sh -c "wget -qO- https://example.com/script.sh | sh"
```

See `docs/user-guide.md` for more examples.
## Architecture

my_hypervisor is structured as a Rust workspace with three crates:

| Crate | Purpose |
|---|---|
| `my_hypervisor-cli` | CLI binary with clap-based command parsing |
| `my_hypervisor-lib` | Core library: VM management, OCI, networking, storage, hypervisor API |
| `my_hypervisor-init` | Guest init binary (PID 1 inside VMs), statically linked with musl |

See `docs/architecture.md` for detailed design documentation.
## Documentation

- **Concepts** -- Core concepts, how components interact, end-to-end flows
- **Architecture** -- System design, crate structure, data flow
- **Components** -- Detailed component design for each subsystem
- **User Guide** -- Complete CLI reference and configuration
- **Tutorial** -- Step-by-step getting started guide
- **Testing** -- Test suite structure and how to run tests
- **CI & Release** -- Forgejo Actions workflow, release process, and troubleshooting
## License

MIT