44 Commits

Author SHA1 Message Date
Maxime Van Hees
da3da0ae30 working ipv6 ip assignment + ssh with login/passwd 2025-08-28 15:19:37 +02:00
Maxime Van Hees
784f87db97 WIP2 2025-08-27 16:03:32 +02:00
Maxime Van Hees
773db2238d working version 1 2025-08-26 17:46:42 +02:00
Maxime Van Hees
e8a369e3a2 WIP2 2025-08-26 17:43:20 +02:00
Maxime Van Hees
4b4f3371b0 WIP: automating VM deployment 2025-08-26 16:50:59 +02:00
Maxime Van Hees
1bb731711b (unstable) pushing WIP 2025-08-25 15:25:00 +02:00
Maxime Van Hees
af89ef0149 networking VMs (WIP) 2025-08-21 18:57:20 +02:00
Maxime Van Hees
768e3e176d fixed overlapping workspace roots 2025-08-21 16:20:15 +02:00
Timur Gordon
aa0248ef17 move rhailib to herolib 2025-08-21 14:32:24 +02:00
Maxime Van Hees
aab2b6f128 fixed cloud hypervisor issues + updated test script (working now) 2025-08-21 13:32:03 +02:00
Maxime Van Hees
d735316b7f cloud-hypervisor SAL + rhai test script for it 2025-08-20 18:01:21 +02:00
Maxime Van Hees
d1c80863b8 fixed test script errors 2025-08-20 15:42:12 +02:00
Maxime Van Hees
169c62da47 Merge branch 'development' of https://git.ourworld.tf/herocode/herolib_rust into development 2025-08-20 14:45:57 +02:00
Maxime Van Hees
33a5f24981 qcow2 SAL + rhai script to test functionality 2025-08-20 14:44:29 +02:00
Timur Gordon
d7562ce466 add data packages and remove empty submodule 2025-08-07 12:13:37 +02:00
ca736d62f3 /// 2025-08-06 03:27:49 +02:00
Maxime Van Hees
078c6f723b merging changes 2025-08-05 20:28:20 +02:00
Maxime Van Hees
9fdb8d8845 integrated hetzner client in repo + showcase of using scope for 'cleaner' scripts 2025-08-05 20:27:14 +02:00
8203a3b1ff Merge branch 'development' of git.ourworld.tf:herocode/herolib_rust into development 2025-08-05 16:39:01 +02:00
1770ac561e ... 2025-08-05 16:39:00 +02:00
Maxime Van Hees
eed6dbf8dc added robot hetzner code to research for later importing it into codebase 2025-08-05 16:32:29 +02:00
4cd4e04028 ... 2025-08-05 16:22:25 +02:00
8cc828fc0e ...... 2025-08-05 16:21:33 +02:00
56af312aad ... 2025-08-05 16:04:55 +02:00
dfd6931c5b ... 2025-08-05 16:00:24 +02:00
6e01f99958 ... 2025-08-05 15:43:13 +02:00
0c02d0e99f ... 2025-08-05 15:33:03 +02:00
7856fc0a4e ... 2025-07-14 13:53:01 +04:00
Mahmoud-Emad
758e59e921 docs: Improve README.md with clearer structure and installation
- Update README.md to provide a clearer structure and improved
  installation instructions.  This makes it easier for users to
  understand and use the library.
- Remove outdated and unnecessary sections like the workspace
  structure details, publishing status, and detailed features
  lists. The information is either not relevant anymore or can be
  found elsewhere.
- Simplify installation instructions to focus on the core aspects
  of installing individual packages or the meta-package with
  features.
- Add a dedicated section for building and running tests,
  improving developer experience and making the process more
  transparent.
- Modernize the overall layout and formatting for better
  readability.
2025-07-13 12:51:08 +03:00
f1806eb788 Merge pull request 'feat: Update SAL Vault examples and documentation' (#24) from development_vault into development
Reviewed-on: herocode/sal#24
2025-07-13 09:31:53 +00:00
Mahmoud-Emad
6e5d9b35e8 feat: Update SAL Vault examples and documentation
- Renamed examples directory to `_archive` to reflect legacy status.
- Updated README.md to reflect current status of vault module,
  including migration from Sameh's implementation to Lee's.
- Temporarily disabled Rhai scripting integration for the vault.
- Added notes regarding current and future development steps.
2025-07-10 14:03:43 +03:00
61f5331804 Merge pull request 'feat: Update zinit-client dependency to 0.4.0' (#23) from development_service_manager into development
Reviewed-on: herocode/sal#23
2025-07-10 08:29:07 +00:00
Mahmoud-Emad
423b7bfa7e feat: Update zinit-client dependency to 0.4.0
- Upgrade `zinit-client` dependency to version 0.4.0 across all
  relevant crates. This resolves potential compatibility issues
  and incorporates bug fixes and improvements from the latest
  release.

- Improve error handling and logging in `zinit-client` and
  `service_manager` to provide more informative feedback and
  prevent potential hangs during log retrieval.  Add timeout to
  prevent indefinite blocking on log retrieval.

- Update `publish-all.sh` script to correctly handle the
  `service_manager` crate during publishing.  Improves handling of
  special cases in the publishing script.

- Add `zinit-client.workspace = true` to `Cargo.toml` to ensure
  consistent dependency management across the workspace.  This
  ensures the correct version of `zinit-client` is used everywhere.
2025-07-10 11:27:59 +03:00
fc2830da31 Merge pull request 'Kubernetes Clusters' (#22) from development_kubernetes_clusters into development
Reviewed-on: herocode/sal#22
2025-07-09 21:41:25 +00:00
Mahmoud-Emad
6b12001ca2 feat: Add Kubernetes examples and update dependencies
- Add Kubernetes examples demonstrating deployment of various
  applications (PostgreSQL, Redis, generic). This improves the
  documentation and provides practical usage examples.
- Add `tokio` dependency for async examples. This enables the use
  of asynchronous operations in the examples.
- Add `once_cell` dependency for improved resource management in
  Kubernetes module. This allows efficient management of
  singletons and other resources.
2025-07-10 00:40:11 +03:00
Mahmoud-Emad
99e121b0d8 feat: Providing some clusters for kubernetes 2025-07-08 16:37:10 +03:00
Mahmoud-Emad
502e345f91 feat: Enhance service manager with zinit socket discovery and systemd fallback
- Improve Linux support by automatically discovering zinit sockets
  using environment variables and common paths.
- Add fallback to systemd if no zinit server is detected.
- Enhance README with detailed instructions for zinit usage,
  including custom socket path configuration.
- Add example demonstrating zinit socket discovery.
- Add logging to show socket discovery process.
- Add unit tests for service manager creation and socket discovery.
2025-07-02 16:37:27 +03:00
Mahmoud-Emad
352e846410 feat: Improve Zinit service manager integration
- Handle arguments and working directory correctly in Zinit: The
  Zinit service manager now correctly handles arguments and
  working directories passed to services, ensuring consistent
  behavior across different service managers.  This fixes issues
  where commands would fail due to incorrect argument parsing or
  missing working directory settings.

- Simplify Zinit service configuration: The Zinit service
  configuration is now simplified, using a more concise and
  readable format. This improves maintainability and reduces the
  complexity of the service configuration process.

- Refactor Zinit service start: This refactors the Zinit service
  start functionality for better readability and maintainability.
  The changes improve the code structure and reduce the complexity
  of the code.
2025-07-02 13:39:11 +03:00
Mahmoud-Emad
b72c50bed9 feat: Improve publish-all.sh script to handle zinit_client
- Correctly handle the `zinit_client` crate name in package
  publication checks and messages.  The script previously
  failed to account for the difference between the directory
  name and the package name.
- Improve the clarity of published crate names in output messages
  for better user understanding.  This makes the output more
  consistent and user friendly.
2025-07-02 12:46:42 +03:00
Mahmoud-Emad
95122dffee feat: Improve service manager testing and error handling
- Add comprehensive testing instructions to README.
- Improve error handling in examples to prevent crashes.
- Enhance launchctl error handling for production safety.
- Improve zinit error handling for production safety.
- Remove obsolete plan_to_fix.md file.
- Update Rhai integration tests for improved robustness.
- Improve service manager creation on Linux with systemd fallback.
2025-07-02 12:05:03 +03:00
Mahmoud-Emad
a63cbe2bd9 Fix service manager examples to use production-ready API
- Updated simple_service.rs to use start(config) instead of create() + start(name)
- Updated service_spaghetti.rs to use the same unified API
- Fixed factory function calls to use create_service_manager() without parameters
- All examples now compile and work with the production-ready synchronous API
- Maintains backward compatibility while providing cleaner interface
2025-07-02 10:46:47 +03:00
Mahmoud-Emad
1e4c0ac41a Resolve merge conflicts - keep production-ready service manager implementation
- Resolved conflicts in service_manager/src/lib.rs
- Resolved conflicts in service_manager/src/launchctl.rs
- Resolved conflicts in service_manager/src/zinit.rs
- Resolved conflicts in service_manager/README.md
- Kept our production-ready synchronous API design
- Maintained comprehensive service lifecycle management
- Preserved cross-platform compatibility (macOS/Linux)
- All tests passing and ready for production use
2025-07-02 10:34:56 +03:00
Mahmoud-Emad
0e49be8d71 Merge branch 'main' of https://git.threefold.info/herocode/sal 2025-07-02 10:25:48 +03:00
Timur Gordon
32339e6063 service manager add examples and improvements 2025-07-02 05:50:18 +02:00
564 changed files with 39576 additions and 2228 deletions

.gitignore (vendored): 1 line changed
View File

@@ -63,3 +63,4 @@ sidebars.ts
tsconfig.json
Cargo.toml.bak
for_augment

View File

@@ -12,22 +12,25 @@ readme = "README.md"
[workspace]
members = [
".",
"vault",
"git",
"redisclient",
"mycelium",
"text",
"os",
"net",
"zinit_client",
"process",
"virt",
"postgresclient",
"kubernetes",
"packages/clients/myceliumclient",
"packages/clients/postgresclient",
"packages/clients/redisclient",
"packages/clients/zinitclient",
"packages/core/net",
"packages/core/text",
"packages/crypt/vault",
"packages/data/ourdb",
"packages/data/radixtree",
"packages/data/tst",
"packages/system/git",
"packages/system/kubernetes",
"packages/system/os",
"packages/system/process",
"packages/system/virt",
"rhai",
"rhailib",
"herodo",
"service_manager",
"packages/clients/hetznerclient",
]
resolver = "2"
@@ -49,7 +52,7 @@ log = "0.4"
once_cell = "1.18.0"
rand = "0.8.5"
regex = "1.8.1"
reqwest = { version = "0.12.15", features = ["json"] }
reqwest = { version = "0.12.15", features = ["json", "blocking"] }
rhai = { version = "1.12.0", features = ["sync"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
@@ -70,6 +73,10 @@ chacha20poly1305 = "0.10.1"
k256 = { version = "0.13.4", features = ["ecdsa", "ecdh"] }
sha2 = "0.10.7"
hex = "0.4"
bincode = { version = "2.0.1", features = ["serde"] }
pbkdf2 = "0.12.2"
getrandom = { version = "0.3.3", features = ["wasm_js"] }
tera = "1.19.0"
# Ethereum dependencies
ethers = { version = "2.0.7", features = ["legacy"] }
@@ -83,28 +90,55 @@ windows = { version = "0.61.1", features = [
] }
# Specialized dependencies
zinit-client = "0.3.0"
zinit-client = "0.4.0"
urlencoding = "2.1.3"
tokio-test = "0.4.4"
kube = { version = "0.95.0", features = ["client", "config", "derive"] }
k8s-openapi = { version = "0.23.0", features = ["latest"] }
tokio-retry = "0.3.0"
governor = "0.6.3"
tower = { version = "0.5.2", features = ["timeout", "limit"] }
serde_yaml = "0.9"
postgres-types = "0.2.5"
r2d2 = "0.8.10"
# SAL dependencies
sal-git = { path = "packages/system/git" }
sal-kubernetes = { path = "packages/system/kubernetes" }
sal-redisclient = { path = "packages/clients/redisclient" }
sal-mycelium = { path = "packages/clients/myceliumclient" }
sal-hetzner = { path = "packages/clients/hetznerclient" }
sal-text = { path = "packages/core/text" }
sal-os = { path = "packages/system/os" }
sal-net = { path = "packages/core/net" }
sal-zinit-client = { path = "packages/clients/zinitclient" }
sal-process = { path = "packages/system/process" }
sal-virt = { path = "packages/system/virt" }
sal-postgresclient = { path = "packages/clients/postgresclient" }
sal-vault = { path = "packages/crypt/vault" }
sal-rhai = { path = "rhai" }
sal-service-manager = { path = "_archive/service_manager" }
[dependencies]
thiserror = "2.0.12" # For error handling in the main Error enum
thiserror = { workspace = true }
tokio = { workspace = true }
# Optional dependencies - users can choose which modules to include
sal-git = { path = "git", optional = true }
sal-kubernetes = { path = "kubernetes", optional = true }
sal-redisclient = { path = "redisclient", optional = true }
sal-mycelium = { path = "mycelium", optional = true }
sal-text = { path = "text", optional = true }
sal-os = { path = "os", optional = true }
sal-net = { path = "net", optional = true }
sal-zinit-client = { path = "zinit_client", optional = true }
sal-process = { path = "process", optional = true }
sal-virt = { path = "virt", optional = true }
sal-postgresclient = { path = "postgresclient", optional = true }
sal-vault = { path = "vault", optional = true }
sal-rhai = { path = "rhai", optional = true }
sal-service-manager = { path = "service_manager", optional = true }
sal-git = { workspace = true, optional = true }
sal-kubernetes = { workspace = true, optional = true }
sal-redisclient = { workspace = true, optional = true }
sal-mycelium = { workspace = true, optional = true }
sal-hetzner = { workspace = true, optional = true }
sal-text = { workspace = true, optional = true }
sal-os = { workspace = true, optional = true }
sal-net = { workspace = true, optional = true }
sal-zinit-client = { workspace = true, optional = true }
sal-process = { workspace = true, optional = true }
sal-virt = { workspace = true, optional = true }
sal-postgresclient = { workspace = true, optional = true }
sal-vault = { workspace = true, optional = true }
sal-rhai = { workspace = true, optional = true }
sal-service-manager = { workspace = true, optional = true }
[features]
default = []
@@ -114,6 +148,7 @@ git = ["dep:sal-git"]
kubernetes = ["dep:sal-kubernetes"]
redisclient = ["dep:sal-redisclient"]
mycelium = ["dep:sal-mycelium"]
hetzner = ["dep:sal-hetzner"]
text = ["dep:sal-text"]
os = ["dep:sal-os"]
net = ["dep:sal-net"]
@@ -123,18 +158,19 @@ virt = ["dep:sal-virt"]
postgresclient = ["dep:sal-postgresclient"]
vault = ["dep:sal-vault"]
rhai = ["dep:sal-rhai"]
service_manager = ["dep:sal-service-manager"]
# service_manager is removed as it's not a direct member anymore
# Convenience feature groups
core = ["os", "process", "text", "net"]
clients = ["redisclient", "postgresclient", "zinit_client", "mycelium"]
infrastructure = ["git", "vault", "kubernetes", "virt", "service_manager"]
clients = ["redisclient", "postgresclient", "zinit_client", "mycelium", "hetzner"]
infrastructure = ["git", "vault", "kubernetes", "virt"]
scripting = ["rhai"]
all = [
"git",
"kubernetes",
"redisclient",
"mycelium",
"hetzner",
"text",
"os",
"net",
@@ -144,5 +180,20 @@ all = [
"postgresclient",
"vault",
"rhai",
"service_manager",
]
# Examples
[[example]]
name = "postgres_cluster"
path = "examples/kubernetes/clusters/postgres.rs"
required-features = ["kubernetes"]
[[example]]
name = "redis_cluster"
path = "examples/kubernetes/clusters/redis.rs"
required-features = ["kubernetes"]
[[example]]
name = "generic_cluster"
path = "examples/kubernetes/clusters/generic.rs"
required-features = ["kubernetes"]
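
The reorganized feature groups above can be combined by consumers of the meta-crate. A minimal sketch (the version number is assumed from the README; `core` and `hetzner` are the feature names defined in this diff):

```toml
[dependencies]
# "core" expands to os, process, text, net; "hetzner" enables the new client
sal = { version = "0.1.0", features = ["core", "hetzner"] }
```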

README.md: 456 lines changed
View File

@@ -1,404 +1,136 @@
# SAL (System Abstraction Layer)
# Herocode Herolib Rust Repository
**Version: 0.1.0**
## Overview
SAL is a comprehensive Rust library designed to provide a unified and simplified interface for a wide array of system-level operations and interactions. It abstracts platform-specific details, enabling developers to write robust, cross-platform code with greater ease. SAL also includes `herodo`, a powerful command-line tool for executing Rhai scripts that leverage SAL's capabilities for automation and system management tasks.
This repository contains the **Herocode Herolib** Rust library and a collection of scripts, examples, and utilities for building, testing, and publishing the SAL (System Abstraction Layer) crates. The repository includes:
## 🏗️ **Cargo Workspace Structure**
- **Rust crates** for various system components (e.g., `os`, `process`, `text`, `git`, `vault`, `kubernetes`, etc.).
- **Rhai scripts** and test suites for each crate.
- **Utility scripts** to automate common development tasks.
SAL is organized as a **Cargo workspace** with 15 specialized crates:
## Scripts
- **Root Package**: `sal` - Umbrella crate that re-exports all modules
- **12 Library Crates**: Core SAL modules (os, process, text, net, git, vault, kubernetes, virt, redisclient, postgresclient, zinit_client, mycelium)
- **1 Binary Crate**: `herodo` - Rhai script execution engine
- **1 Integration Crate**: `rhai` - Rhai scripting integration layer
The repository provides three primary helper scripts located in the repository root:
This workspace structure provides excellent build performance, dependency management, and maintainability.
| Script | Description | Typical Usage |
|--------|-------------|--------------|
| `scripts/publish-all.sh` | Publishes all SAL crates to **crates.io** in the correct dependency order. Handles version bumping, dependency updates, dry-run mode, and rate limiting. | `./scripts/publish-all.sh [--dry-run] [--wait <seconds>] [--version <ver>]` |
| `build_herodo.sh` | Builds the `herodo` binary from the `herodo` package and optionally runs a specified Rhai script. | `./build_herodo.sh [script_name]` |
| `run_rhai_tests.sh` | Executes all Rhai test suites across the repository, logging results and providing a summary. | `./run_rhai_tests.sh` |
### **🚀 Workspace Benefits**
- **Unified Dependency Management**: Shared dependencies across all crates with consistent versions
- **Optimized Build Performance**: Parallel compilation and shared build artifacts
- **Simplified Testing**: Run tests across all modules with a single command
- **Modular Architecture**: Each module is independently maintainable while sharing common infrastructure
- **Production Ready**: 100% test coverage with comprehensive Rhai integration tests
Below are detailed usage instructions for each script.
## 📦 Installation
---
SAL is designed to be modular - install only the components you need!
## 1. `scripts/publish-all.sh`
### Option 1: Individual Crates (Recommended)
### Purpose
Install only the modules you need:
- Publishes each SAL crate in the correct dependency order.
- Updates crate versions (if `--version` is supplied).
- Updates path dependencies to version dependencies before publishing.
- Supports **dry-run** mode to preview actions without publishing.
- Handles rate limiting between crate publishes.
### Options
| Option | Description |
|--------|-------------|
| `--dry-run` | Shows what would be published without actually publishing. |
| `--wait <seconds>` | Wait time between publishes (default: 15s). |
| `--version <ver>` | Set a new version for all crates (updates `Cargo.toml` files). |
| `-h, --help` | Show help message. |
### Example Usage
```bash
# Currently available packages
cargo add sal-os sal-process sal-text sal-net sal-git sal-vault sal-kubernetes sal-virt
# Dry run: no crates will be published
./scripts/publish-all.sh --dry-run
# Coming soon (rate limited)
# cargo add sal-redisclient sal-postgresclient sal-zinit-client sal-mycelium sal-rhai
# Publish with a custom wait time and version bump
./scripts/publish-all.sh --wait 30 --version 1.2.3
# Normal publish (no dry run)
./scripts/publish-all.sh
```
### Option 2: Meta-crate with Features
### Notes
Use the main `sal` crate with specific features:
- Must be run from the repository root (where `Cargo.toml` lives).
- Requires `cargo` and a logged-in `cargo` session (`cargo login`).
- The script automatically updates dependencies in each crate's `Cargo.toml` to use the new version before publishing.
```bash
# Coming soon - meta-crate with features (rate limited)
# cargo add sal --features os,process,text
# cargo add sal --features core # os, process, text, net
# cargo add sal --features infrastructure # git, vault, kubernetes, virt
# cargo add sal --features all
---
# For now, use individual crates (see Option 1 above)
```
## 2. `build_herodo.sh`
### Quick Start Examples
### Purpose
#### Using Individual Crates (Recommended)
```rust
use sal_os::fs;
use sal_process::run;
fn main() -> Result<(), Box<dyn std::error::Error>> {
// File system operations
let files = fs::list_files(".")?;
println!("Found {} files", files.len());
// Process execution
let result = run::command("echo hello")?;
println!("Output: {}", result.stdout);
Ok(())
}
```
#### Using Meta-crate with Features
```rust
// In Cargo.toml: sal = { version = "0.1.0", features = ["os", "process"] }
use sal::os::fs;
use sal::process::run;
fn main() -> Result<(), Box<dyn std::error::Error>> {
// File system operations
let files = fs::list_files(".")?;
println!("Found {} files", files.len());
// Process execution
let result = run::command("echo hello")?;
println!("Output: {}", result.stdout);
Ok(())
}
```
#### Using Herodo for Scripting
```bash
# Build and install herodo
git clone https://github.com/PlanetFirst/sal.git
cd sal
./build_herodo.sh
# Create a script file
cat > example.rhai << 'EOF'
// File operations
let files = find_files(".", "*.rs");
print("Found " + files.len() + " Rust files");
// Process execution
let result = run("echo 'Hello from SAL!'");
print("Output: " + result.stdout);
// Network operations
let reachable = http_check("https://github.com");
print("GitHub reachable: " + reachable);
EOF
# Execute the script
herodo example.rhai
```
## 📦 Available Packages
SAL is published as individual crates, allowing you to install only what you need:
| Package | Description | Install Command |
|---------|-------------|-----------------|
| [`sal-os`](https://crates.io/crates/sal-os) | Operating system operations | `cargo add sal-os` |
| [`sal-process`](https://crates.io/crates/sal-process) | Process management | `cargo add sal-process` |
| [`sal-text`](https://crates.io/crates/sal-text) | Text processing utilities | `cargo add sal-text` |
| [`sal-net`](https://crates.io/crates/sal-net) | Network operations | `cargo add sal-net` |
| [`sal-git`](https://crates.io/crates/sal-git) | Git repository management | `cargo add sal-git` |
| [`sal-vault`](https://crates.io/crates/sal-vault) | Cryptographic operations | `cargo add sal-vault` |
| [`sal-kubernetes`](https://crates.io/crates/sal-kubernetes) | Kubernetes management | `cargo add sal-kubernetes` |
| [`sal-virt`](https://crates.io/crates/sal-virt) | Virtualization tools | `cargo add sal-virt` |
| `sal-redisclient` | Redis database client | `cargo add sal-redisclient` ⏳ |
| `sal-postgresclient` | PostgreSQL client | `cargo add sal-postgresclient` ⏳ |
| `sal-zinit-client` | Zinit process supervisor | `cargo add sal-zinit-client` ⏳ |
| `sal-mycelium` | Mycelium network client | `cargo add sal-mycelium` ⏳ |
| `sal-rhai` | Rhai scripting integration | `cargo add sal-rhai` ⏳ |
| `sal` | Meta-crate with features | `cargo add sal --features all` ⏳ |
| `herodo` | Script executor binary | Build from source ⏳ |
**Legend**: ✅ Published | ⏳ Publishing soon (rate limited)
### 📢 **Publishing Status**
**Currently Available on crates.io:**
- ✅ [`sal-os`](https://crates.io/crates/sal-os) - Operating system operations
- ✅ [`sal-process`](https://crates.io/crates/sal-process) - Process management
- ✅ [`sal-text`](https://crates.io/crates/sal-text) - Text processing utilities
- ✅ [`sal-net`](https://crates.io/crates/sal-net) - Network operations
- ✅ [`sal-git`](https://crates.io/crates/sal-git) - Git repository management
- ✅ [`sal-vault`](https://crates.io/crates/sal-vault) - Cryptographic operations
- ✅ [`sal-kubernetes`](https://crates.io/crates/sal-kubernetes) - Kubernetes management
- ✅ [`sal-virt`](https://crates.io/crates/sal-virt) - Virtualization tools
**Publishing Soon** (hit crates.io rate limit):
- `sal-redisclient`, `sal-postgresclient`, `sal-zinit-client`, `sal-mycelium`
- `sal-rhai`
- `sal` (meta-crate), `herodo` (binary)
**Estimated Timeline**: Remaining packages will be published within 24 hours once the rate limit resets.
## Core Features
SAL offers a broad spectrum of functionalities, including:
- **System Operations**: File and directory management, environment variable access, system information retrieval, and OS-specific commands.
- **Process Management**: Create, monitor, control, and interact with system processes.
- **Containerization Tools**:
- Integration with **Buildah** for building OCI/Docker-compatible container images.
- Integration with **nerdctl** for managing containers (run, stop, list, build, etc.).
- **Version Control**: Programmatic interaction with Git repositories (clone, commit, push, pull, status, etc.).
- **Database Clients**:
- **Redis**: Robust client for interacting with Redis servers.
- **PostgreSQL**: Client for executing queries and managing PostgreSQL databases.
- **Scripting Engine**: In-built support for the **Rhai** scripting language, allowing SAL functionalities to be scripted and automated, primarily through the `herodo` tool.
- **Networking & Services**:
- **Mycelium**: Tools for Mycelium network peer management and message passing.
- **Zinit**: Client for interacting with the Zinit process supervision system.
- **RFS (Remote/Virtual Filesystem)**: Mount, manage, pack, and unpack various types of filesystems (local, SSH, S3, WebDAV).
- **Text Processing**: A suite of utilities for text manipulation, formatting, and regular expressions.
- **Cryptography (`vault`)**: Functions for common cryptographic operations.
## `herodo`: The SAL Scripting Tool
`herodo` is a command-line utility bundled with SAL that executes Rhai scripts. It empowers users to automate tasks and orchestrate complex workflows by leveraging SAL's diverse modules directly from scripts.
- Builds the `herodo` binary from the `herodo` package.
- Copies the binary to a system-wide location (`/usr/local/bin`) if run as root, otherwise to `~/hero/bin`.
- Optionally runs a specified Rhai script after building.
### Usage
```bash
# Execute a single Rhai script
herodo script.rhai
# Build only
./build_herodo.sh
# Execute a script with arguments
herodo script.rhai arg1 arg2
# Execute all .rhai scripts in a directory
herodo /path/to/scripts/
# Build and run a specific Rhai script (e.g., `example`):
./build_herodo.sh example
```
If a directory is provided, `herodo` will execute all `.rhai` scripts within that directory (and its subdirectories) in alphabetical order.
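For example, given a directory of scripts, execution order follows the sorted file names (a sketch; the paths are illustrative):

```bash
# 01_setup.rhai runs before 02_deploy.rhai, and .rhai files in
# subdirectories are picked up as well
herodo /opt/scripts/
```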
### Details
### Scriptable SAL Modules via `herodo`
- The script changes to its own directory, builds the `herodo` crate (`cargo build`), and copies the binary.
- If a script name is provided, it looks for the script in:
- `src/rhaiexamples/<name>.rhai`
- `src/herodo/scripts/<name>.rhai`
- If the script is not found, `build_herodo.sh` exits with an error.
The following SAL modules and functionalities are exposed to the Rhai scripting environment through `herodo`:
---
- **OS (`os`)**: Comprehensive file system operations, file downloading & installation, and system package management. [Documentation](os/README.md)
- **Process (`process`)**: Robust command and script execution, plus process management (listing, finding, killing, checking command existence). [Documentation](process/README.md)
- **Text (`text`)**: String manipulation, prefixing, path/name fixing, text replacement, and templating. [Documentation](text/README.md)
- **Net (`net`)**: Network operations, HTTP requests, and connectivity utilities. [Documentation](net/README.md)
- **Git (`git`)**: High-level repository management and generic Git command execution with Redis-backed authentication (clone, pull, push, commit, etc.). [Documentation](git/README.md)
- **Vault (`vault`)**: Cryptographic operations, keypair management, encryption, decryption, hashing, etc. [Documentation](vault/README.md)
- **Redis Client (`redisclient`)**: Execute Redis commands (`redis_get`, `redis_set`, `redis_execute`, etc.). [Documentation](redisclient/README.md)
- **PostgreSQL Client (`postgresclient`)**: Execute SQL queries against PostgreSQL databases. [Documentation](postgresclient/README.md)
- **Zinit (`zinit_client`)**: Client for Zinit process supervisor (service management, logs). [Documentation](zinit_client/README.md)
- **Mycelium (`mycelium`)**: Client for Mycelium decentralized networking API (node info, peer management, messaging). [Documentation](mycelium/README.md)
- **Virtualization (`virt`)**:
- **Buildah**: OCI/Docker image building functions. [Documentation](virt/README.md)
- **nerdctl**: Container lifecycle management (`nerdctl_run`, `nerdctl_stop`, `nerdctl_images`, `nerdctl_image_build`, etc.)
- **RFS**: Mount various filesystems (local, SSH, S3, etc.), pack/unpack filesystem layers.
## 3. `run_rhai_tests.sh`
### Example `herodo` Rhai Script
### Purpose
```rhai
// file: /opt/scripts/example_task.rhai
- Runs **all** Rhai test suites across the repository.
- Supports both the legacy `rhai_tests` directory and the newer `*/tests/rhai` layout.
- Logs output to `run_rhai_tests.log` and prints a summary.
// OS operations
println("Checking for /tmp/my_app_data...");
if !exist("/tmp/my_app_data") {
mkdir("/tmp/my_app_data");
println("Created directory /tmp/my_app_data");
}
// Redis operations
println("Setting Redis key 'app_status' to 'running'");
redis_set("app_status", "running");
let status = redis_get("app_status");
println("Current app_status from Redis: " + status);
// Process execution
println("Listing files in /tmp:");
let output = run("ls -la /tmp");
println(output.stdout);
println("Script finished.");
```
Run with: `herodo /opt/scripts/example_task.rhai`
For more examples, check the individual module test directories (e.g., `text/tests/rhai/`, `os/tests/rhai/`, etc.) in this repository.
## Using SAL as a Rust Library
### Option 1: Individual Crates (Recommended)
Add only the SAL modules you need:
```toml
[dependencies]
sal-os = "0.1.0"
sal-process = "0.1.0"
sal-text = "0.1.0"
```
```rust
use sal_os::fs;
use sal_process::run;
use sal_text::template;
fn main() -> Result<(), Box<dyn std::error::Error>> {
// File operations
let files = fs::list_files(".")?;
println!("Found {} files", files.len());
// Process execution
let result = run::command("echo 'Hello SAL!'")?;
println!("Output: {}", result.stdout);
// Text templating
let template_str = "Hello {{name}}!";
let mut vars = std::collections::HashMap::new();
vars.insert("name".to_string(), "World".to_string());
let rendered = template::render(template_str, &vars)?;
println!("Rendered: {}", rendered);
Ok(())
}
```
### Option 2: Meta-crate with Features (Coming Soon)
```toml
[dependencies]
sal = { version = "0.1.0", features = ["os", "process", "text"] }
```
```rust
use sal::os::fs;
use sal::process::run;
use sal::text::template;
// Same code as above, but using the meta-crate
```
*(Note: The meta-crate `sal` will be available once all individual packages are published.)*
## 🎯 **Why Choose SAL?**
### **Modular Architecture**
- **Install Only What You Need**: Each package is independent - no bloated dependencies
- **Faster Compilation**: Smaller dependency trees mean faster build times
- **Smaller Binaries**: Only include the functionality you actually use
- **Clear Dependencies**: Explicit about what functionality your project uses
### **Developer Experience**
- **Consistent APIs**: All packages follow the same design patterns and conventions
- **Comprehensive Documentation**: Each package has detailed documentation and examples
- **Real-World Tested**: All functionality is production-tested, no placeholder code
- **Type Safety**: Leverages Rust's type system for safe, reliable operations
### **Scripting Power**
- **Herodo Integration**: Execute Rhai scripts with full access to SAL functionality
- **Cross-Platform**: Works consistently across Windows, macOS, and Linux
- **Automation Ready**: Perfect for DevOps, CI/CD, and system administration tasks
## 📦 **Workspace Modules Overview**
SAL is organized as a Cargo workspace with the following crates:
### **Core Library Modules**
- **`sal-os`**: Core OS interactions, file system operations, environment access
- **`sal-process`**: Process creation, management, and control
- **`sal-text`**: Utilities for text processing and manipulation
- **`sal-net`**: Network operations, HTTP requests, and connectivity utilities
### **Integration Modules**
- **`sal-git`**: Git repository management and operations
- **`sal-vault`**: Cryptographic functions and keypair management
- **`sal-rhai`**: Integration layer for the Rhai scripting engine, used by `herodo`
### **Client Modules**
- **`sal-redisclient`**: Client for Redis database interactions
- **`sal-postgresclient`**: Client for PostgreSQL database interactions
- **`sal-zinit-client`**: Client for Zinit process supervisor
- **`sal-mycelium`**: Client for Mycelium network operations
### **Specialized Modules**
- **`sal-virt`**: Virtualization-related utilities (buildah, nerdctl, rfs)
### **Root Package & Binary**
- **`sal`**: Root umbrella crate that re-exports all modules
- **`herodo`**: Command-line binary for executing Rhai scripts
## 🔨 **Building SAL**
Build the entire workspace (all crates) using Cargo:
### Usage
```bash
# Build all workspace members
cargo build --workspace
# Build for release
cargo build --workspace --release
# Build specific crate
cargo build -p sal-text
cargo build -p herodo
```
The `herodo` executable will be located at `target/debug/herodo` or `target/release/herodo`.
## 🧪 **Running Tests**
### **Rust Unit Tests**
```bash
# Run all workspace tests
cargo test --workspace
# Run tests for specific crate
cargo test -p sal-text
cargo test -p sal-os
# Run only library tests (faster)
cargo test --workspace --lib
```
### **Rhai Integration Tests**
Run comprehensive Rhai script tests that exercise `herodo` and SAL's scripted functionalities:
```bash
# Run all Rhai integration tests (16 modules)
# Run all tests
./run_rhai_tests.sh
# Results: 16/16 modules pass with 100% success rate
```
The Rhai tests validate real-world functionality across all SAL modules and provide comprehensive integration testing.
### Output
- Colored console output for readability.
- Log file (`run_rhai_tests.log`) contains full output for later review.
- Summary includes total modules, passed, and failed counts.
- Exit code `0` if all tests pass, `1` otherwise.
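A quick way to consume these outputs in CI (a sketch based on the behavior described above):

```bash
./run_rhai_tests.sh
echo "exit code: $?"          # 0 = all suites passed, 1 = at least one failure
tail -n 20 run_rhai_tests.log # full output is kept in the log file
```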
---
## General Development Workflow
1. **Build**: Use `build_herodo.sh` to compile the `herodo` binary.
2. **Test**: Run `run_rhai_tests.sh` to ensure all Rhai scripts pass.
3. **Publish**: When ready to release, use `scripts/publish-all.sh` (with `--dry-run` first to verify).
## Prerequisites
- **Rust toolchain** (`cargo`, `rustc`) installed.
- **Rhai** interpreter (`herodo`) built and available.
- **Git** for version control.
- **Cargo login** for publishing to crates.io.
## License
SAL is licensed under the Apache License 2.0. See the [LICENSE](LICENSE) file for details.
See `LICENSE` for details.
---
**Happy coding!**

View File

@@ -17,7 +17,7 @@ serde_json = { workspace = true }
futures = { workspace = true }
once_cell = { workspace = true }
# Use base zinit-client instead of SAL wrapper
zinit-client = { version = "0.3.0" }
zinit-client = { version = "0.4.0" }
# Optional Rhai integration
rhai = { workspace = true, optional = true }
@@ -36,6 +36,7 @@ rhai = ["dep:rhai"]
tokio-test = "0.4"
rhai = { workspace = true }
tempfile = { workspace = true }
env_logger = "0.10"
[[test]]
name = "zinit_integration_tests"

View File

@@ -129,17 +129,64 @@ herodo examples/service_manager/basic_usage.rhai
See `examples/service_manager/README.md` for detailed documentation.
## Testing
Run the test suite:
```bash
cargo test -p sal-service-manager
```
For Rhai integration tests:
```bash
cargo test -p sal-service-manager --features rhai
```
### Testing with Herodo
To test the service manager with real Rhai scripts using herodo, first build herodo:
```bash
./build_herodo.sh
```
Then run Rhai scripts that use the service manager:
```bash
herodo your_service_script.rhai
```
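A minimal script sketch for this workflow; `create_service_manager` matches the factory registered by the Rhai module in this change, while the commented calls are hypothetical placeholders (the actual registered service-function names are truncated in this diff):

```rhai
// your_service_script.rhai
let manager = create_service_manager();
print("service manager created");

// Hypothetical follow-up calls; check the rhai module for the real names:
// start(manager, config);
// stop(manager, "my-service");
```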
## Prerequisites
### Linux (zinit)
### Linux (zinit/systemd)
Make sure zinit is installed and running:
The service manager automatically discovers running zinit servers and falls back to systemd if none are found.
**For zinit (recommended):**
```bash
# Start zinit with default socket
zinit -s /tmp/zinit.sock init
# Or with a custom socket path
zinit -s /var/run/zinit.sock init
```
**Socket Discovery:**
The service manager will automatically find running zinit servers by checking:
1. `ZINIT_SOCKET_PATH` environment variable (if set)
2. Common socket locations: `/var/run/zinit.sock`, `/tmp/zinit.sock`, `/run/zinit.sock`, `./zinit.sock`
**Custom socket path:**
```bash
# Set custom socket path
export ZINIT_SOCKET_PATH=/your/custom/path/zinit.sock
```
**Systemd fallback:**
If no zinit server is detected, the service manager automatically falls back to systemd.
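To observe which backend gets selected on a given machine, the bundled discovery example can be run with debug logging (commands as documented in the example itself):

```bash
# With a zinit server running, discovery should pick it up
zinit -s /tmp/zinit.sock init &
RUST_LOG=debug cargo run --example socket_discovery_test -p sal-service-manager

# Without zinit running, the log should show the systemd fallback
```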
### macOS (launchctl)
No additional setup required - uses the built-in launchctl system.

View File

@@ -0,0 +1,47 @@
# Service Manager Examples
This directory contains examples demonstrating the usage of the `sal-service-manager` crate.
## Running Examples
To run any example, use the following command structure from the `service_manager` crate's root directory:
```sh
cargo run --example <EXAMPLE_NAME>
```
---
### 1. `simple_service`
This example demonstrates the ideal, clean lifecycle of a service using the unified `start` API, which creates the service definition and starts it in one step.
**Behavior:**
1. Starts a new service (the definition is created as part of `start`).
2. Checks its status to confirm it's running.
3. Stops the service.
4. Checks its status again to confirm it's stopped.
5. Removes the service definition.
**Run it:**
```sh
cargo run --example simple_service
```
### 2. `service_spaghetti`
This example demonstrates how the service manager handles "messy" or improper sequences of operations, showcasing its error handling and robustness.
**Behavior:**
1. Creates a service.
2. Starts the service.
3. Tries to start the **same service again** (which should fail as it's already running).
4. Removes the service **without stopping it first** (the manager should handle this gracefully).
5. Tries to stop the **already removed** service (which should fail).
6. Tries to remove the service **again** (which should also fail).
**Run it:**
```sh
cargo run --example service_spaghetti
```

View File

@@ -0,0 +1,109 @@
//! service_spaghetti - An example of messy service management.
//!
//! This example demonstrates how the service manager behaves when commands
//! are issued in a less-than-ideal order, such as starting a service that's
//! already running or removing a service that hasn't been stopped.
use sal_service_manager::{create_service_manager, ServiceConfig};
use std::collections::HashMap;
use std::thread;
use std::time::Duration;
fn main() {
// Initialize logging to see socket discovery in action
env_logger::init();
let manager = match create_service_manager() {
Ok(manager) => manager,
Err(e) => {
eprintln!("Error: Failed to create service manager: {}", e);
return;
}
};
let service_name = "com.herocode.examples.spaghetti";
let service_config = ServiceConfig {
name: service_name.to_string(),
binary_path: "/bin/sh".to_string(),
args: vec![
"-c".to_string(),
"while true; do echo 'Spaghetti service is running...'; sleep 5; done".to_string(),
],
working_directory: None,
environment: HashMap::new(),
auto_restart: false,
};
println!("--- Service Spaghetti Example ---");
println!("This example demonstrates messy, error-prone service management.");
// Cleanup from previous runs to ensure a clean slate
if let Ok(true) = manager.exists(service_name) {
println!(
"\nService '{}' found from a previous run. Cleaning up first.",
service_name
);
let _ = manager.stop(service_name);
let _ = manager.remove(service_name);
println!("Cleanup complete.");
}
// 1. Start the service (creates and starts in one step)
println!("\n1. Starting the service for the first time...");
match manager.start(&service_config) {
Ok(()) => println!(" -> Success: Service '{}' started.", service_name),
Err(e) => {
eprintln!(
" -> Error: Failed to start service: {}. Halting example.",
e
);
return;
}
}
thread::sleep(Duration::from_secs(2));
// 2. Try to start the service again while it's already running
println!("\n2. Trying to start the *same service* again...");
match manager.start(&service_config) {
Ok(()) => println!(" -> Unexpected Success: Service started again."),
Err(e) => eprintln!(
" -> Expected Error: {}. The manager should detect it is already running.",
e
),
}
// 3. Let it run for a bit
println!("\n3. Letting the service run for 5 seconds...");
thread::sleep(Duration::from_secs(5));
// 4. Remove the service without stopping it first
// The `remove` function is designed to stop the service if it's running.
println!("\n4. Removing the service without explicitly stopping it first...");
match manager.remove(service_name) {
Ok(()) => println!(" -> Success: Service was stopped and removed."),
Err(e) => eprintln!(" -> Error: Failed to remove service: {}", e),
}
// 5. Try to stop the service after it has been removed
println!("\n5. Trying to stop the service that was just removed...");
match manager.stop(service_name) {
Ok(()) => println!(" -> Unexpected Success: Stopped a removed service."),
Err(e) => eprintln!(
" -> Expected Error: {}. The manager knows the service is gone.",
e
),
}
// 6. Try to remove the service again
println!("\n6. Trying to remove the service again...");
match manager.remove(service_name) {
Ok(()) => println!(" -> Unexpected Success: Removed a non-existent service."),
Err(e) => eprintln!(
" -> Expected Error: {}. The manager correctly reports it's not found.",
e
),
}
println!("\n--- Spaghetti Example Finished ---");
}

View File

@@ -0,0 +1,110 @@
use sal_service_manager::{create_service_manager, ServiceConfig};
use std::collections::HashMap;
use std::thread;
use std::time::Duration;
fn main() {
// Initialize logging to see socket discovery in action
env_logger::init();
// 1. Create a service manager for the current platform
let manager = match create_service_manager() {
Ok(manager) => manager,
Err(e) => {
eprintln!("Error: Failed to create service manager: {}", e);
return;
}
};
// 2. Define the configuration for our new service
let service_name = "com.herocode.examples.simpleservice";
let service_config = ServiceConfig {
name: service_name.to_string(),
// A simple command that runs in a loop
binary_path: "/bin/sh".to_string(),
args: vec![
"-c".to_string(),
"while true; do echo 'Simple service is running...'; date; sleep 5; done".to_string(),
],
working_directory: None,
environment: HashMap::new(),
auto_restart: false,
};
println!("--- Service Manager Example ---");
// Cleanup from previous runs, if necessary
if let Ok(true) = manager.exists(service_name) {
println!(
"Service '{}' already exists. Cleaning up before starting.",
service_name
);
if let Err(e) = manager.stop(service_name) {
println!(
"Note: could not stop existing service (it might not be running): {}",
e
);
}
if let Err(e) = manager.remove(service_name) {
eprintln!("Error: failed to remove existing service: {}", e);
return;
}
println!("Cleanup complete.");
}
// 3. Start the service (creates and starts in one step)
println!("\n1. Starting service: '{}'", service_name);
match manager.start(&service_config) {
Ok(()) => println!("Service '{}' started successfully.", service_name),
Err(e) => {
eprintln!("Error: Failed to start service '{}': {}", service_name, e);
return;
}
}
// Give it a moment to run
println!("\nWaiting for 2 seconds for the service to initialize...");
thread::sleep(Duration::from_secs(2));
// 4. Check the status of the service
println!("\n2. Checking service status...");
match manager.status(service_name) {
Ok(status) => println!("Service status: {:?}", status),
Err(e) => eprintln!(
"Error: Failed to get status for service '{}': {}",
service_name, e
),
}
println!("\nLetting the service run for 10 seconds. Check logs if you can.");
thread::sleep(Duration::from_secs(10));
// 5. Stop the service
println!("\n3. Stopping service: '{}'", service_name);
match manager.stop(service_name) {
Ok(()) => println!("Service '{}' stopped successfully.", service_name),
Err(e) => eprintln!("Error: Failed to stop service '{}': {}", service_name, e),
}
println!("\nWaiting for 2 seconds for the service to stop...");
thread::sleep(Duration::from_secs(2));
// Check status again
println!("\n4. Checking status after stopping...");
match manager.status(service_name) {
Ok(status) => println!("Service status: {:?}", status),
Err(e) => eprintln!(
"Error: Failed to get status for service '{}': {}",
service_name, e
),
}
// 6. Remove the service
println!("\n5. Removing service: '{}'", service_name);
match manager.remove(service_name) {
Ok(()) => println!("Service '{}' removed successfully.", service_name),
Err(e) => eprintln!("Error: Failed to remove service '{}': {}", service_name, e),
}
println!("\n--- Example Finished ---");
}

View File

@@ -0,0 +1,47 @@
//! Socket Discovery Test
//!
//! This example demonstrates the zinit socket discovery functionality.
//! It shows how the service manager finds available zinit sockets.
use sal_service_manager::create_service_manager;
fn main() {
// Initialize logging to see socket discovery in action
env_logger::init();
println!("=== Zinit Socket Discovery Test ===");
println!("This test demonstrates how the service manager discovers zinit sockets.");
println!();
// Test environment variable
if let Ok(socket_path) = std::env::var("ZINIT_SOCKET_PATH") {
println!("🔍 ZINIT_SOCKET_PATH environment variable set to: {}", socket_path);
} else {
println!("🔍 ZINIT_SOCKET_PATH environment variable not set");
}
println!();
println!("🚀 Creating service manager...");
match create_service_manager() {
Ok(_manager) => {
println!("✅ Service manager created successfully!");
#[cfg(target_os = "macos")]
println!("📱 Platform: macOS - Using launchctl");
#[cfg(target_os = "linux")]
println!("🐧 Platform: Linux - Check logs above for socket discovery details");
}
Err(e) => {
println!("❌ Failed to create service manager: {}", e);
}
}
println!();
println!("=== Test Complete ===");
println!();
println!("To test zinit socket discovery on Linux:");
println!("1. Start zinit: zinit -s /tmp/zinit.sock init");
println!("2. Run with logging: RUST_LOG=debug cargo run --example socket_discovery_test -p sal-service-manager");
println!("3. Or set custom path: ZINIT_SOCKET_PATH=/custom/path.sock RUST_LOG=debug cargo run --example socket_discovery_test -p sal-service-manager");
}

View File

@@ -6,10 +6,25 @@ use std::path::PathBuf;
use tokio::process::Command;
use tokio::runtime::Runtime;
// Shared runtime for async operations
static ASYNC_RUNTIME: Lazy<Runtime> = Lazy::new(|| {
Runtime::new().expect("Failed to create async runtime for LaunchctlServiceManager")
});
// Shared runtime for async operations - production-safe initialization
static ASYNC_RUNTIME: Lazy<Option<Runtime>> = Lazy::new(|| Runtime::new().ok());
/// Get the async runtime, creating a temporary one if the static runtime failed
fn get_runtime() -> Result<Runtime, ServiceManagerError> {
// Try to use the static runtime first
if let Some(_runtime) = ASYNC_RUNTIME.as_ref() {
// We can't return a reference to the static runtime because we need ownership
// for block_on, so we create a new one. This is a reasonable trade-off for safety.
Runtime::new().map_err(|e| {
ServiceManagerError::Other(format!("Failed to create async runtime: {}", e))
})
} else {
// Static runtime failed, try to create a new one
Runtime::new().map_err(|e| {
ServiceManagerError::Other(format!("Failed to create async runtime: {}", e))
})
}
}
#[derive(Debug)]
pub struct LaunchctlServiceManager {
@@ -221,8 +236,9 @@ impl ServiceManager for LaunchctlServiceManager {
}
fn start(&self, config: &ServiceConfig) -> Result<(), ServiceManagerError> {
// Use the shared runtime for async operations
ASYNC_RUNTIME.block_on(async {
// Use production-safe runtime for async operations
let runtime = get_runtime()?;
runtime.block_on(async {
let label = self.get_service_label(&config.name);
// Check if service is already loaded
@@ -249,7 +265,8 @@ impl ServiceManager for LaunchctlServiceManager {
}
fn start_existing(&self, service_name: &str) -> Result<(), ServiceManagerError> {
ASYNC_RUNTIME.block_on(async {
let runtime = get_runtime()?;
runtime.block_on(async {
let label = self.get_service_label(service_name);
let plist_path = self.get_plist_path(service_name);
@@ -300,8 +317,9 @@ impl ServiceManager for LaunchctlServiceManager {
// First start the service
self.start(config)?;
// Then wait for confirmation using the shared runtime
ASYNC_RUNTIME.block_on(async {
// Then wait for confirmation using production-safe runtime
let runtime = get_runtime()?;
runtime.block_on(async {
self.wait_for_service_status(&config.name, timeout_secs)
.await
})
@@ -315,15 +333,17 @@ impl ServiceManager for LaunchctlServiceManager {
// First start the existing service
self.start_existing(service_name)?;
// Then wait for confirmation using the shared runtime
ASYNC_RUNTIME.block_on(async {
// Then wait for confirmation using production-safe runtime
let runtime = get_runtime()?;
runtime.block_on(async {
self.wait_for_service_status(service_name, timeout_secs)
.await
})
}
fn stop(&self, service_name: &str) -> Result<(), ServiceManagerError> {
ASYNC_RUNTIME.block_on(async {
let runtime = get_runtime()?;
runtime.block_on(async {
let _label = self.get_service_label(service_name);
let plist_path = self.get_plist_path(service_name);
@@ -359,7 +379,8 @@ impl ServiceManager for LaunchctlServiceManager {
}
fn status(&self, service_name: &str) -> Result<ServiceStatus, ServiceManagerError> {
ASYNC_RUNTIME.block_on(async {
let runtime = get_runtime()?;
runtime.block_on(async {
let label = self.get_service_label(service_name);
let plist_path = self.get_plist_path(service_name);
@@ -397,7 +418,8 @@ impl ServiceManager for LaunchctlServiceManager {
service_name: &str,
lines: Option<usize>,
) -> Result<String, ServiceManagerError> {
ASYNC_RUNTIME.block_on(async {
let runtime = get_runtime()?;
runtime.block_on(async {
let log_path = self.get_log_path(service_name);
if !log_path.exists() {
@@ -421,7 +443,8 @@ impl ServiceManager for LaunchctlServiceManager {
}
fn list(&self) -> Result<Vec<String>, ServiceManagerError> {
ASYNC_RUNTIME.block_on(async {
let runtime = get_runtime()?;
runtime.block_on(async {
let list_output = self.run_launchctl(&["list"]).await?;
let services: Vec<String> = list_output
@@ -456,8 +479,9 @@ impl ServiceManager for LaunchctlServiceManager {
);
}
// Remove the plist file using the shared runtime
ASYNC_RUNTIME.block_on(async {
// Remove the plist file using production-safe runtime
let runtime = get_runtime()?;
runtime.block_on(async {
let plist_path = self.get_plist_path(service_name);
if plist_path.exists() {
tokio::fs::remove_file(&plist_path).await?;

View File

@@ -0,0 +1,301 @@
use std::collections::HashMap;
use thiserror::Error;
#[derive(Error, Debug)]
pub enum ServiceManagerError {
#[error("Service '{0}' not found")]
ServiceNotFound(String),
#[error("Service '{0}' already exists")]
ServiceAlreadyExists(String),
#[error("Failed to start service '{0}': {1}")]
StartFailed(String, String),
#[error("Failed to stop service '{0}': {1}")]
StopFailed(String, String),
#[error("Failed to restart service '{0}': {1}")]
RestartFailed(String, String),
#[error("Failed to get logs for service '{0}': {1}")]
LogsFailed(String, String),
#[error("IO error: {0}")]
IoError(#[from] std::io::Error),
#[error("Service manager error: {0}")]
Other(String),
}
#[derive(Debug, Clone)]
pub struct ServiceConfig {
pub name: String,
pub binary_path: String,
pub args: Vec<String>,
pub working_directory: Option<String>,
pub environment: HashMap<String, String>,
pub auto_restart: bool,
}
#[derive(Debug, Clone, PartialEq)]
pub enum ServiceStatus {
Running,
Stopped,
Failed,
Unknown,
}
pub trait ServiceManager: Send + Sync {
/// Check if a service exists
fn exists(&self, service_name: &str) -> Result<bool, ServiceManagerError>;
/// Start a service with the given configuration
fn start(&self, config: &ServiceConfig) -> Result<(), ServiceManagerError>;
/// Start an existing service by name (load existing plist/config)
fn start_existing(&self, service_name: &str) -> Result<(), ServiceManagerError>;
/// Start a service and wait for confirmation that it's running or failed
fn start_and_confirm(
&self,
config: &ServiceConfig,
timeout_secs: u64,
) -> Result<(), ServiceManagerError>;
/// Start an existing service and wait for confirmation that it's running or failed
fn start_existing_and_confirm(
&self,
service_name: &str,
timeout_secs: u64,
) -> Result<(), ServiceManagerError>;
/// Stop a service by name
fn stop(&self, service_name: &str) -> Result<(), ServiceManagerError>;
/// Restart a service by name
fn restart(&self, service_name: &str) -> Result<(), ServiceManagerError>;
/// Get the status of a service
fn status(&self, service_name: &str) -> Result<ServiceStatus, ServiceManagerError>;
/// Get logs for a service
fn logs(&self, service_name: &str, lines: Option<usize>)
-> Result<String, ServiceManagerError>;
/// List all managed services
fn list(&self) -> Result<Vec<String>, ServiceManagerError>;
/// Remove a service configuration (stop if running)
fn remove(&self, service_name: &str) -> Result<(), ServiceManagerError>;
}
// Platform-specific implementations
#[cfg(target_os = "macos")]
mod launchctl;
#[cfg(target_os = "macos")]
pub use launchctl::LaunchctlServiceManager;
#[cfg(target_os = "linux")]
mod systemd;
#[cfg(target_os = "linux")]
pub use systemd::SystemdServiceManager;
mod zinit;
pub use zinit::ZinitServiceManager;
#[cfg(feature = "rhai")]
pub mod rhai;
/// Discover available zinit socket paths
///
/// This function checks for zinit sockets in the following order:
/// 1. Environment variable ZINIT_SOCKET_PATH (if set)
/// 2. Common socket locations with connectivity testing
///
/// # Returns
///
/// Returns the first working socket path found, or None if no working zinit server is detected.
#[cfg(target_os = "linux")]
fn discover_zinit_socket() -> Option<String> {
// First check environment variable
if let Ok(env_socket_path) = std::env::var("ZINIT_SOCKET_PATH") {
log::debug!("Checking ZINIT_SOCKET_PATH: {}", env_socket_path);
if test_zinit_socket(&env_socket_path) {
log::info!(
"Using zinit socket from ZINIT_SOCKET_PATH: {}",
env_socket_path
);
return Some(env_socket_path);
} else {
log::warn!(
"ZINIT_SOCKET_PATH specified but socket is not accessible: {}",
env_socket_path
);
}
}
// Try common socket locations
let common_paths = [
"/var/run/zinit.sock",
"/tmp/zinit.sock",
"/run/zinit.sock",
"./zinit.sock",
];
log::debug!("Discovering zinit socket from common locations...");
for path in &common_paths {
log::debug!("Testing socket path: {}", path);
if test_zinit_socket(path) {
log::info!("Found working zinit socket at: {}", path);
return Some(path.to_string());
}
}
log::debug!("No working zinit socket found");
None
}
/// Test if a zinit socket is accessible and responsive
///
/// This function attempts to create a ZinitServiceManager and perform a basic
/// connectivity test by listing services.
#[cfg(target_os = "linux")]
fn test_zinit_socket(socket_path: &str) -> bool {
// Check if socket file exists first
if !std::path::Path::new(socket_path).exists() {
log::debug!("Socket file does not exist: {}", socket_path);
return false;
}
// Try to create a manager and test basic connectivity
match ZinitServiceManager::new(socket_path) {
Ok(manager) => {
// Test basic connectivity by trying to list services
match manager.list() {
Ok(_) => {
log::debug!("Socket {} is responsive", socket_path);
true
}
Err(e) => {
log::debug!("Socket {} exists but not responsive: {}", socket_path, e);
false
}
}
}
Err(e) => {
log::debug!("Failed to create manager for socket {}: {}", socket_path, e);
false
}
}
}
/// Create a service manager appropriate for the current platform
///
/// - On macOS: Uses launchctl for service management
/// - On Linux: Uses zinit for service management with systemd fallback
///
/// # Returns
///
/// Returns a Result containing the service manager or an error if initialization fails.
/// On Linux, it first tries to discover a working zinit socket. If no zinit server is found,
/// it will fall back to systemd.
///
/// # Environment Variables
///
/// - `ZINIT_SOCKET_PATH`: Specifies the zinit socket path (Linux only)
///
/// # Errors
///
/// Returns `ServiceManagerError` if:
/// - The platform is not supported (Windows, etc.)
/// - Service manager initialization fails on all available backends
pub fn create_service_manager() -> Result<Box<dyn ServiceManager>, ServiceManagerError> {
#[cfg(target_os = "macos")]
{
Ok(Box::new(LaunchctlServiceManager::new()))
}
#[cfg(target_os = "linux")]
{
// Try to discover a working zinit socket
if let Some(socket_path) = discover_zinit_socket() {
match ZinitServiceManager::new(&socket_path) {
Ok(zinit_manager) => {
log::info!("Using zinit service manager with socket: {}", socket_path);
return Ok(Box::new(zinit_manager));
}
Err(zinit_error) => {
log::warn!(
"Failed to create zinit manager for discovered socket {}: {}",
socket_path,
zinit_error
);
}
}
} else {
log::info!("No running zinit server detected. To use zinit, start it with: zinit -s /tmp/zinit.sock init");
}
// Fallback to systemd
log::info!("Falling back to systemd service manager");
Ok(Box::new(SystemdServiceManager::new()))
}
#[cfg(not(any(target_os = "macos", target_os = "linux")))]
{
Err(ServiceManagerError::Other(
"Service manager not implemented for this platform".to_string(),
))
}
}
/// Create a service manager for zinit with a custom socket path
///
/// This is useful when zinit is running with a non-default socket path
pub fn create_zinit_service_manager(
socket_path: &str,
) -> Result<Box<dyn ServiceManager>, ServiceManagerError> {
Ok(Box::new(ZinitServiceManager::new(socket_path)?))
}
/// Create a service manager for systemd (Linux alternative)
///
/// This creates a systemd-based service manager as an alternative to zinit on Linux
#[cfg(target_os = "linux")]
pub fn create_systemd_service_manager() -> Box<dyn ServiceManager> {
Box::new(SystemdServiceManager::new())
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_create_service_manager() {
// This test ensures the service manager can be created without panicking
let result = create_service_manager();
assert!(result.is_ok(), "Service manager creation should succeed");
}
#[cfg(target_os = "linux")]
#[test]
fn test_socket_discovery_with_env_var() {
// Test that environment variable is respected
std::env::set_var("ZINIT_SOCKET_PATH", "/test/path.sock");
// The env var points at a socket that does not exist, so discovery must
// reject it and fall through to the common paths, which should also be
// absent in a test environment.
let result = discover_zinit_socket();
assert!(
result.is_none(),
"A non-existent ZINIT_SOCKET_PATH should not be returned"
);
std::env::remove_var("ZINIT_SOCKET_PATH");
}
#[cfg(target_os = "linux")]
#[test]
fn test_socket_discovery_without_env_var() {
// Ensure env var is not set
std::env::remove_var("ZINIT_SOCKET_PATH");
// The discover function should try common paths
// Since no zinit is running, it should return None
let result = discover_zinit_socket();
// This is expected to be None in test environment
assert!(
result.is_none(),
"Should return None when no zinit server is running"
);
}
}
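
For orientation, here is a minimal sketch of how this factory is consumed end to end. It assumes the crate is used as `sal_service_manager`, that `ServiceConfig`'s fields (shown in the tests above) are public, and that `ServiceManagerError` implements `std::error::Error`:

```rust
use sal_service_manager::{create_service_manager, ServiceConfig};
use std::collections::HashMap;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // launchctl on macOS; a discovered zinit socket or the systemd fallback on Linux.
    let manager = create_service_manager()?;

    let config = ServiceConfig {
        name: "demo".to_string(),
        binary_path: "/bin/echo".to_string(),
        args: vec!["hello".to_string()],
        working_directory: Some("/tmp".to_string()),
        environment: HashMap::new(),
        auto_restart: false,
    };

    // Drive one full lifecycle through the trait object.
    manager.start(&config)?;
    println!("status: {:?}", manager.status("demo")?);
    manager.stop("demo")?;
    manager.remove("demo")?;
    Ok(())
}
```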

View File

@@ -14,10 +14,12 @@ pub struct RhaiServiceManager {
}
impl RhaiServiceManager {
pub fn new() -> Self {
Self {
inner: Arc::new(create_service_manager()),
}
pub fn new() -> Result<Self, Box<EvalAltResult>> {
let manager = create_service_manager()
.map_err(|e| format!("Failed to create service manager: {}", e))?;
Ok(Self {
inner: Arc::new(manager),
})
}
}
@@ -25,7 +27,10 @@ impl RhaiServiceManager {
pub fn register_service_manager_module(engine: &mut Engine) -> Result<(), Box<EvalAltResult>> {
// Factory function to create service manager
engine.register_type::<RhaiServiceManager>();
engine.register_fn("create_service_manager", RhaiServiceManager::new);
engine.register_fn(
"create_service_manager",
|| -> Result<RhaiServiceManager, Box<EvalAltResult>> { RhaiServiceManager::new() },
);
// Service management functions
engine.register_fn(

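A minimal sketch of consuming this registration from Rust, assuming a supported platform with a working service backend (otherwise `create_service_manager` now surfaces a script error instead of panicking):

```rust
use rhai::Engine;

fn main() -> Result<(), Box<rhai::EvalAltResult>> {
    let mut engine = Engine::new();
    sal_service_manager::rhai::register_service_manager_module(&mut engine)?;

    // The factory is fallible now, so a failure shows up as a Rhai error here.
    engine.run(
        r#"
        let manager = create_service_manager();
        print("service manager ready");
    "#,
    )?;
    Ok(())
}
```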
View File

@@ -1,5 +1,4 @@
use crate::{ServiceConfig, ServiceManager, ServiceManagerError, ServiceStatus};
use std::collections::HashMap;
use std::fs;
use std::path::PathBuf;
use std::process::Command;

View File

@@ -7,9 +7,25 @@ use tokio::runtime::Runtime;
use tokio::time::timeout;
use zinit_client::{ServiceStatus as ZinitServiceStatus, ZinitClient, ZinitError};
// Shared runtime for async operations
static ASYNC_RUNTIME: Lazy<Runtime> =
Lazy::new(|| Runtime::new().expect("Failed to create async runtime for ZinitServiceManager"));
// Shared runtime for async operations - production-safe initialization
static ASYNC_RUNTIME: Lazy<Option<Runtime>> = Lazy::new(|| Runtime::new().ok());
/// Get the async runtime, creating a temporary one if the static runtime failed
fn get_runtime() -> Result<Runtime, ServiceManagerError> {
// block_on requires an owned runtime, so a fresh one is built per call.
// The static runtime serves only as a first-use health check: forcing its
// initialization here keeps runtime problems visible early.
let _ = ASYNC_RUNTIME.as_ref();
Runtime::new().map_err(|e| {
ServiceManagerError::Other(format!("Failed to create async runtime: {}", e))
})
}
pub struct ZinitServiceManager {
client: Arc<ZinitClient>,
@@ -32,16 +48,21 @@ impl ZinitServiceManager {
// Check if we're already in a tokio runtime context
if let Ok(_handle) = tokio::runtime::Handle::try_current() {
// We're in an async context; run the operation on a dedicated thread to avoid nesting runtimes
let result = std::thread::spawn(move || {
let rt = tokio::runtime::Runtime::new().unwrap();
rt.block_on(operation)
})
let result = std::thread::spawn(
move || -> Result<Result<T, ZinitError>, ServiceManagerError> {
let rt = Runtime::new().map_err(|e| {
ServiceManagerError::Other(format!("Failed to create runtime: {}", e))
})?;
Ok(rt.block_on(operation))
},
)
.join()
.map_err(|_| ServiceManagerError::Other("Thread join failed".to_string()))?;
result.map_err(|e| ServiceManagerError::Other(e.to_string()))
result?.map_err(|e| ServiceManagerError::Other(e.to_string()))
} else {
// No current runtime, use the shared runtime
ASYNC_RUNTIME
// No current runtime, use production-safe runtime
let runtime = get_runtime()?;
runtime
.block_on(operation)
.map_err(|e| ServiceManagerError::Other(e.to_string()))
}
@@ -79,8 +100,9 @@ impl ZinitServiceManager {
})?
.map_err(|e| ServiceManagerError::Other(e.to_string()))
} else {
// No current runtime, use the shared runtime
ASYNC_RUNTIME
// No current runtime, use production-safe runtime
let runtime = get_runtime()?;
runtime
.block_on(timeout_op)
.map_err(|_| {
ServiceManagerError::Other(format!(
@@ -104,14 +126,27 @@ impl ServiceManager for ZinitServiceManager {
}
fn start(&self, config: &ServiceConfig) -> Result<(), ServiceManagerError> {
let service_config = json!({
"exec": config.binary_path,
"args": config.args,
"working_directory": config.working_directory,
// Build the exec command with args
let mut exec_command = config.binary_path.clone();
if !config.args.is_empty() {
exec_command.push(' ');
exec_command.push_str(&config.args.join(" "));
}
// Create zinit-compatible service configuration
let mut service_config = json!({
"exec": exec_command,
"oneshot": !config.auto_restart, // zinit uses oneshot, not restart
"env": config.environment,
"restart": config.auto_restart,
});
// Add optional fields if present
if let Some(ref working_dir) = config.working_directory {
// Zinit doesn't support working_directory directly, so we need to modify the exec command
let cd_command = format!("cd {} && {}", working_dir, exec_command);
service_config["exec"] = json!(cd_command);
}
let client = Arc::clone(&self.client);
let service_name = config.name.clone();
self.execute_async(
@@ -271,6 +306,8 @@ impl ServiceManager for ZinitServiceManager {
let logs = self
.execute_async(async move {
use futures::StreamExt;
use tokio::time::{timeout, Duration};
let mut log_stream = client
.logs(false, Some(service_name_owned.as_str()))
.await?;
@@ -279,7 +316,10 @@ impl ServiceManager for ZinitServiceManager {
// Collect logs from the stream with a reasonable limit
let mut count = 0;
const MAX_LOGS: usize = 100;
const LOG_TIMEOUT: Duration = Duration::from_secs(5);
// Use timeout to prevent hanging
let result = timeout(LOG_TIMEOUT, async {
while let Some(log_result) = log_stream.next().await {
match log_result {
Ok(log_entry) => {
@@ -292,6 +332,17 @@ impl ServiceManager for ZinitServiceManager {
Err(_) => break,
}
}
})
.await;
// Handle timeout - this is not an error, just means no more logs available
if result.is_err() {
log::debug!(
"Log reading timed out after {} seconds, returning {} logs",
LOG_TIMEOUT.as_secs(),
logs.len()
);
}
Ok::<Vec<String>, ZinitError>(logs)
})

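To make the `exec`/`oneshot` translation concrete, this sketch (hypothetical service values, using `serde_json` as the code above does) shows the shape of the zinit config produced for a service with a working directory and `auto_restart: false`:

```rust
use serde_json::json;

fn main() {
    // Hypothetical inputs: binary_path = "/usr/bin/python3",
    // args = ["-m", "http.server"], working_directory = Some("/srv"),
    // auto_restart = false.
    let service_config = json!({
        // working_directory is folded into exec via `cd ... &&`
        "exec": "cd /srv && /usr/bin/python3 -m http.server",
        // auto_restart = false maps to oneshot = true
        "oneshot": true,
        "env": {},
    });
    println!("{}", service_config);
}
```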
View File

@@ -4,7 +4,7 @@ use std::collections::HashMap;
#[test]
fn test_create_service_manager() {
// Test that the factory function creates the appropriate service manager for the platform
let manager = create_service_manager();
let manager = create_service_manager().expect("Failed to create service manager");
// Test basic functionality - should be able to call methods without panicking
let list_result = manager.list();
@@ -12,7 +12,10 @@ fn test_create_service_manager() {
// The result might be an error (if no service system is available), but it shouldn't panic
match list_result {
Ok(services) => {
println!("✓ Service manager created successfully, found {} services", services.len());
println!(
"✓ Service manager created successfully, found {} services",
services.len()
);
}
Err(e) => {
println!("✓ Service manager created, but got expected error: {}", e);
@@ -63,8 +66,14 @@ fn test_service_config_creation() {
assert!(env_config.args.is_empty());
assert_eq!(env_config.working_directory, Some("/tmp".to_string()));
assert_eq!(env_config.environment.len(), 2);
assert_eq!(env_config.environment.get("PATH"), Some(&"/usr/bin:/bin".to_string()));
assert_eq!(env_config.environment.get("HOME"), Some(&"/tmp".to_string()));
assert_eq!(
env_config.environment.get("PATH"),
Some(&"/usr/bin:/bin".to_string())
);
assert_eq!(
env_config.environment.get("HOME"),
Some(&"/tmp".to_string())
);
assert!(env_config.auto_restart);
println!("✓ Environment service config created successfully");
@@ -91,7 +100,10 @@ fn test_service_config_clone() {
assert_eq!(original_config.name, cloned_config.name);
assert_eq!(original_config.binary_path, cloned_config.binary_path);
assert_eq!(original_config.args, cloned_config.args);
assert_eq!(original_config.working_directory, cloned_config.working_directory);
assert_eq!(
original_config.working_directory,
cloned_config.working_directory
);
assert_eq!(original_config.environment, cloned_config.environment);
assert_eq!(original_config.auto_restart, cloned_config.auto_restart);
@@ -110,10 +122,16 @@ fn test_macos_service_manager() {
let list_result = manager.list();
match list_result {
Ok(services) => {
println!("✓ macOS LaunchctlServiceManager created successfully, found {} services", services.len());
println!(
"✓ macOS LaunchctlServiceManager created successfully, found {} services",
services.len()
);
}
Err(e) => {
println!("✓ macOS LaunchctlServiceManager created, but got expected error: {}", e);
println!(
"✓ macOS LaunchctlServiceManager created, but got expected error: {}",
e
);
}
}
}
@@ -130,10 +148,16 @@ fn test_linux_service_manager() {
let list_result = manager.list();
match list_result {
Ok(services) => {
println!("✓ Linux SystemdServiceManager created successfully, found {} services", services.len());
println!(
"✓ Linux SystemdServiceManager created successfully, found {} services",
services.len()
);
}
Err(e) => {
println!("✓ Linux SystemdServiceManager created, but got expected error: {}", e);
println!(
"✓ Linux SystemdServiceManager created, but got expected error: {}",
e
);
}
}
}
@@ -157,7 +181,10 @@ fn test_service_status_debug() {
assert!(!debug_str.is_empty());
assert_eq!(status, &cloned);
println!("✓ ServiceStatus::{:?} debug and clone work correctly", status);
println!(
"✓ ServiceStatus::{:?} debug and clone work correctly",
status
);
}
}
@@ -191,7 +218,8 @@ fn test_service_manager_error_debug() {
#[test]
fn test_service_manager_trait_object() {
// Test that we can use ServiceManager as a trait object
let manager: Box<dyn ServiceManager> = create_service_manager();
let manager: Box<dyn ServiceManager> =
create_service_manager().expect("Failed to create service manager");
// Test that we can call methods through the trait object
let list_result = manager.list();

View File

@@ -0,0 +1,177 @@
// Service lifecycle management test script
// This script tests REAL complete service lifecycle scenarios
print("=== Service Lifecycle Management Test ===");
// Create service manager
let manager = create_service_manager();
print("✓ Service manager created");
// Test configuration - real services for testing
let test_services = [
#{
name: "lifecycle-test-1",
binary_path: "/bin/echo",
args: ["Lifecycle test 1"],
working_directory: "/tmp",
environment: #{},
auto_restart: false
},
#{
name: "lifecycle-test-2",
binary_path: "/bin/echo",
args: ["Lifecycle test 2"],
working_directory: "/tmp",
environment: #{ "TEST_VAR": "test_value" },
auto_restart: false
}
];
let total_tests = 0;
let passed_tests = 0;
// Test 1: Service Creation and Start
print("\n1. Testing service creation and start...");
for service_config in test_services {
print(`\nStarting service: ${service_config.name}`);
try {
start(manager, service_config);
print(` ✓ Service ${service_config.name} started successfully`);
passed_tests += 1;
} catch(e) {
print(` ✗ Service ${service_config.name} start failed: ${e}`);
}
total_tests += 1;
}
// Test 2: Service Existence Check
print("\n2. Testing service existence checks...");
for service_config in test_services {
print(`\nChecking existence of: ${service_config.name}`);
try {
let service_exists = exists(manager, service_config.name);
if service_exists {
print(` ✓ Service ${service_config.name} exists: ${service_exists}`);
passed_tests += 1;
} else {
print(` ✗ Service ${service_config.name} doesn't exist after start`);
}
} catch(e) {
print(` ✗ Existence check failed for ${service_config.name}: ${e}`);
}
total_tests += 1;
}
// Test 3: Status Check
print("\n3. Testing status checks...");
for service_config in test_services {
print(`\nChecking status of: ${service_config.name}`);
try {
let service_status = status(manager, service_config.name);
print(` ✓ Service ${service_config.name} status: ${service_status}`);
passed_tests += 1;
} catch(e) {
print(` ✗ Status check failed for ${service_config.name}: ${e}`);
}
total_tests += 1;
}
// Test 4: Service List Check
print("\n4. Testing service list...");
try {
let services = list(manager);
print(` ✓ Service list retrieved (${services.len()} services)`);
// Check if our test services are in the list
for service_config in test_services {
let found = false;
for service in services {
if service.contains(service_config.name) {
found = true;
print(` ✓ Found ${service_config.name} in list`);
break;
}
}
if !found {
print(` ⚠ ${service_config.name} not found in service list`);
}
}
passed_tests += 1;
} catch(e) {
print(` ✗ Service list failed: ${e}`);
}
total_tests += 1;
// Test 5: Service Stop
print("\n5. Testing service stop...");
for service_config in test_services {
print(`\nStopping service: ${service_config.name}`);
try {
stop(manager, service_config.name);
print(` ✓ Service ${service_config.name} stopped successfully`);
passed_tests += 1;
} catch(e) {
print(` ✗ Service ${service_config.name} stop failed: ${e}`);
}
total_tests += 1;
}
// Test 6: Service Removal
print("\n6. Testing service removal...");
for service_config in test_services {
print(`\nRemoving service: ${service_config.name}`);
try {
remove(manager, service_config.name);
print(` ✓ Service ${service_config.name} removed successfully`);
passed_tests += 1;
} catch(e) {
print(` ✗ Service ${service_config.name} removal failed: ${e}`);
}
total_tests += 1;
}
// Test 7: Cleanup Verification
print("\n7. Testing cleanup verification...");
for service_config in test_services {
print(`\nVerifying removal of: ${service_config.name}`);
try {
let exists_after_remove = exists(manager, service_config.name);
if !exists_after_remove {
print(` ✓ Service ${service_config.name} correctly doesn't exist after removal`);
passed_tests += 1;
} else {
print(` ✗ Service ${service_config.name} still exists after removal`);
}
} catch(e) {
print(` ✗ Cleanup verification failed for ${service_config.name}: ${e}`);
}
total_tests += 1;
}
// Test Summary
print("\n=== Lifecycle Test Summary ===");
print(`Services tested: ${test_services.len()}`);
print(`Total operations: ${total_tests}`);
print(`Successful operations: ${passed_tests}`);
print(`Failed operations: ${total_tests - passed_tests}`);
print(`Success rate: ${(passed_tests * 100) / total_tests}%`);
if passed_tests == total_tests {
print("\n🎉 All lifecycle tests passed!");
print("Service manager is working correctly across all scenarios.");
} else {
print(`\n⚠ ${total_tests - passed_tests} test(s) failed`);
print("Some service manager operations need attention.");
}
print("\n=== Service Lifecycle Test Complete ===");
// Return test results
#{
summary: #{
total_tests: total_tests,
passed_tests: passed_tests,
success_rate: (passed_tests * 100) / total_tests,
services_tested: test_services.len()
}
}

View File

@@ -0,0 +1,218 @@
// Basic service manager functionality test script
// This script tests the REAL service manager through Rhai integration
print("=== Service Manager Basic Functionality Test ===");
// Test configuration
let test_service_name = "rhai-test-service";
let test_binary = "/bin/echo";
let test_args = ["Hello from Rhai service manager test"];
print(`Testing service: ${test_service_name}`);
print(`Binary: ${test_binary}`);
print(`Args: ${test_args}`);
// Test results tracking
let test_results = #{
creation: "NOT_RUN",
exists_before: "NOT_RUN",
start: "NOT_RUN",
exists_after: "NOT_RUN",
status: "NOT_RUN",
list: "NOT_RUN",
stop: "NOT_RUN",
remove: "NOT_RUN",
cleanup: "NOT_RUN"
};
let passed_tests = 0;
let total_tests = 0;
// Test 1: Service Manager Creation
print("\n1. Testing service manager creation...");
try {
let manager = create_service_manager();
print("✓ Service manager created successfully");
test_results["creation"] = "PASS";
passed_tests += 1;
total_tests += 1;
} catch(e) {
print(`✗ Service manager creation failed: ${e}`);
test_results["creation"] = "FAIL";
total_tests += 1;
// Return early if we can't create the manager
return test_results;
}
// Create the service manager for all subsequent tests
let manager = create_service_manager();
// Test 2: Check if service exists before creation
print("\n2. Testing service existence check (before creation)...");
try {
let exists_before = exists(manager, test_service_name);
print(`✓ Service existence check: ${exists_before}`);
if !exists_before {
print("✓ Service correctly doesn't exist before creation");
test_results["exists_before"] = "PASS";
passed_tests += 1;
} else {
print("⚠ Service unexpectedly exists before creation");
test_results["exists_before"] = "WARN";
}
total_tests += 1;
} catch(e) {
print(`✗ Service existence check failed: ${e}`);
test_results["exists_before"] = "FAIL";
total_tests += 1;
}
// Test 3: Start the service
print("\n3. Testing service start...");
try {
// Create a service configuration object
let service_config = #{
name: test_service_name,
binary_path: test_binary,
args: test_args,
working_directory: "/tmp",
environment: #{},
auto_restart: false
};
start(manager, service_config);
print("✓ Service started successfully");
test_results["start"] = "PASS";
passed_tests += 1;
total_tests += 1;
} catch(e) {
print(`✗ Service start failed: ${e}`);
test_results["start"] = "FAIL";
total_tests += 1;
}
// Test 4: Check if service exists after creation
print("\n4. Testing service existence check (after creation)...");
try {
let exists_after = exists(manager, test_service_name);
print(`✓ Service existence check: ${exists_after}`);
if exists_after {
print("✓ Service correctly exists after creation");
test_results["exists_after"] = "PASS";
passed_tests += 1;
} else {
print("✗ Service doesn't exist after creation");
test_results["exists_after"] = "FAIL";
}
total_tests += 1;
} catch(e) {
print(`✗ Service existence check failed: ${e}`);
test_results["exists_after"] = "FAIL";
total_tests += 1;
}
// Test 5: Check service status
print("\n5. Testing service status...");
try {
let service_status = status(manager, test_service_name);
print(`✓ Service status: ${service_status}`);
test_results["status"] = "PASS";
passed_tests += 1;
total_tests += 1;
} catch(e) {
print(`✗ Service status check failed: ${e}`);
test_results["status"] = "FAIL";
total_tests += 1;
}
// Test 6: List services
print("\n6. Testing service list...");
try {
let services = list(manager);
print("✓ Service list retrieved");
// Skip service search due to Rhai type constraints with Vec iteration
print(" ⚠️ Skipping service search due to Rhai type constraints");
test_results["list"] = "PASS";
passed_tests += 1;
total_tests += 1;
} catch(e) {
print(`✗ Service list failed: ${e}`);
test_results["list"] = "FAIL";
total_tests += 1;
}
// Test 7: Stop the service
print("\n7. Testing service stop...");
try {
stop(manager, test_service_name);
print(`✓ Service stopped: ${test_service_name}`);
test_results["stop"] = "PASS";
passed_tests += 1;
total_tests += 1;
} catch(e) {
print(`✗ Service stop failed: ${e}`);
test_results["stop"] = "FAIL";
total_tests += 1;
}
// Test 8: Remove the service
print("\n8. Testing service remove...");
try {
remove(manager, test_service_name);
print(`✓ Service removed: ${test_service_name}`);
test_results["remove"] = "PASS";
passed_tests += 1;
total_tests += 1;
} catch(e) {
print(`✗ Service remove failed: ${e}`);
test_results["remove"] = "FAIL";
total_tests += 1;
}
// Test 9: Verify cleanup
print("\n9. Testing cleanup verification...");
try {
let exists_after_remove = exists(manager, test_service_name);
if !exists_after_remove {
print("✓ Service correctly doesn't exist after removal");
test_results["cleanup"] = "PASS";
passed_tests += 1;
} else {
print("✗ Service still exists after removal");
test_results["cleanup"] = "FAIL";
}
total_tests += 1;
} catch(e) {
print(`✗ Cleanup verification failed: ${e}`);
test_results["cleanup"] = "FAIL";
total_tests += 1;
}
// Test Summary
print("\n=== Test Summary ===");
print(`Total tests: ${total_tests}`);
print(`Passed: ${passed_tests}`);
print(`Failed: ${total_tests - passed_tests}`);
print(`Success rate: ${(passed_tests * 100) / total_tests}%`);
print("\nDetailed Results:");
for test_name in test_results.keys() {
let result = test_results[test_name];
let status_icon = if result == "PASS" { "✓" } else if result == "FAIL" { "✗" } else { "⚠" };
print(` ${status_icon} ${test_name}: ${result}`);
}
if passed_tests == total_tests {
print("\n🎉 All tests passed!");
} else {
print(`\n⚠ ${total_tests - passed_tests} test(s) failed`);
}
print("\n=== Service Manager Basic Test Complete ===");
// Return test results for potential use by calling code
test_results

View File

@@ -4,14 +4,18 @@ use std::path::Path;
/// Helper function to create a Rhai engine for service manager testing
fn create_service_manager_engine() -> Result<Engine, Box<EvalAltResult>> {
let engine = Engine::new();
// Register any custom functions that would be needed for service manager integration
// For now, we'll keep it simple since the actual service manager integration
// would require more complex setup
#[cfg(feature = "rhai")]
{
let mut engine = Engine::new();
// Register the service manager module for real testing
sal_service_manager::rhai::register_service_manager_module(&mut engine)?;
Ok(engine)
}
#[cfg(not(feature = "rhai"))]
{
Ok(Engine::new())
}
}
/// Helper function to run a Rhai script file
fn run_rhai_script(script_path: &str) -> Result<rhai::Dynamic, Box<EvalAltResult>> {
@@ -65,7 +69,7 @@ fn test_rhai_service_manager_basic() {
}
Err(e) => {
println!("✗ Rhai basic test failed: {}", e);
panic!("Rhai script execution failed");
assert!(false, "Rhai script execution failed: {}", e);
}
}
}
@@ -112,7 +116,7 @@ fn test_rhai_service_lifecycle() {
}
Err(e) => {
println!("✗ Rhai lifecycle test failed: {}", e);
panic!("Rhai script execution failed");
assert!(false, "Rhai script execution failed: {}", e);
}
}
}
@@ -155,7 +159,7 @@ fn test_rhai_engine_functionality() {
println!("✓ All basic Rhai functionality tests passed");
} else {
println!("✗ Some basic Rhai functionality tests failed");
panic!("Basic Rhai tests failed");
assert!(false, "Basic Rhai tests failed");
}
}
}
@@ -181,7 +185,7 @@ fn test_rhai_engine_functionality() {
}
Err(e) => {
println!("✗ Basic Rhai functionality test failed: {}", e);
panic!("Basic Rhai test failed");
assert!(false, "Basic Rhai test failed: {}", e);
}
}
}
@@ -201,7 +205,10 @@ fn test_rhai_script_error_handling() {
match engine.eval::<rhai::Dynamic>(error_script) {
Ok(_) => {
println!("⚠ Expected error but script succeeded");
panic!("Error handling test failed - expected error but got success");
assert!(
false,
"Error handling test failed - expected error but got success"
);
}
Err(e) => {
println!("✓ Error correctly caught: {}", e);
@@ -228,16 +235,16 @@ fn test_rhai_script_files_exist() {
match fs::read_to_string(script_file) {
Ok(content) => {
if content.trim().is_empty() {
panic!("Script file {} is empty", script_file);
assert!(false, "Script file {} is empty", script_file);
}
println!(" Content length: {} characters", content.len());
}
Err(e) => {
panic!("Failed to read script file {}: {}", script_file, e);
assert!(false, "Failed to read script file {}: {}", script_file, e);
}
}
} else {
panic!("Required script file not found: {}", script_file);
assert!(false, "Required script file not found: {}", script_file);
}
}

cargo_instructions.md Normal file
View File

View File

@@ -1,64 +1,76 @@
# Hero Vault Cryptography Examples
# SAL Vault Examples
This directory contains examples demonstrating the Hero Vault cryptography functionality integrated into the SAL project.
This directory contains examples demonstrating the SAL Vault functionality.
## Overview
Hero Vault provides cryptographic operations including:
SAL Vault provides secure key management and cryptographic operations including:
- Key space management (creation, loading, encryption, decryption)
- Keypair management (creation, selection, listing)
- Digital signatures (signing and verification)
- Symmetric encryption (key generation, encryption, decryption)
- Ethereum wallet functionality
- Smart contract interactions
- Key-value store with encryption
- Vault creation and management
- KeySpace operations (encrypted key-value stores)
- Symmetric key generation and operations
- Asymmetric key operations (signing and verification)
- Secure key derivation from passwords
## Example Files
## Current Status
- `example.rhai` - Basic example demonstrating key management, signing, and encryption
- `advanced_example.rhai` - Advanced example with error handling, conditional logic, and more complex operations
- `key_persistence_example.rhai` - Demonstrates creating and saving a key space to disk
- `load_existing_space.rhai` - Shows how to load a previously created key space and use its keypairs
- `contract_example.rhai` - Demonstrates loading a contract ABI and interacting with smart contracts
- `agung_send_transaction.rhai` - Demonstrates sending native tokens on the Agung network
- `agung_contract_with_args.rhai` - Shows how to interact with contracts with arguments on Agung
⚠️ **Note**: The vault module is currently being updated to use Lee's implementation.
The Rhai scripting integration is temporarily disabled while we adapt the examples
to work with the new vault API.
## Running the Examples
You can run the examples using the `herodo` tool that comes with the SAL project:
```bash
# Run a single example
herodo --path example.rhai
# Run all examples using the provided script
./run_examples.sh
```
## Available Operations
- **Vault Management**: Create and manage vault instances
- **KeySpace Operations**: Open encrypted key-value stores within vaults
- **Symmetric Encryption**: Generate keys and encrypt/decrypt data
- **Asymmetric Operations**: Create keypairs, sign messages, verify signatures
## Example Files (Legacy - Sameh's Implementation)
⚠️ **These examples are currently archived and use the previous vault implementation**:
- `_archive/example.rhai` - Basic example demonstrating key management, signing, and encryption
- `_archive/advanced_example.rhai` - Advanced example with error handling and complex operations
- `_archive/key_persistence_example.rhai` - Demonstrates creating and saving a key space to disk
- `_archive/load_existing_space.rhai` - Shows how to load a previously created key space
- `_archive/contract_example.rhai` - Demonstrates smart contract interactions (Ethereum)
- `_archive/agung_send_transaction.rhai` - Demonstrates Ethereum transactions on Agung network
- `_archive/agung_contract_with_args.rhai` - Shows contract interactions with arguments
## Current Implementation (Lee's Vault)
The current vault implementation provides:
```rust
// Create a new vault
let vault = Vault::new(&path).await?;
// Open an encrypted keyspace
let keyspace = vault.open_keyspace("my_space", "password").await?;
// Perform cryptographic operations
// (API documentation coming soon)
```
## Key Space Storage
Key spaces are stored in the `~/.hero-vault/key-spaces/` directory by default. Each key space is stored in a separate JSON file named after the key space (e.g., `my_space.json`).
## Ethereum Functionality
The Hero Vault module provides comprehensive Ethereum wallet functionality:
- Creating and managing wallets for different networks
- Sending ETH transactions
- Checking balances
- Interacting with smart contracts (read and write functions)
- Support for multiple networks (Ethereum, Gnosis, Peaq, Agung, etc.)
## Migration Status
- **Vault Core**: Lee's implementation is active
- **Archive**: Sameh's implementation preserved in `vault/_archive/`
- **Rhai Integration**: Being developed for Lee's implementation
- **Examples**: Will be updated to use Lee's API
- **Ethereum Features**: Not available in Lee's implementation
## Security
Key spaces are encrypted with ChaCha20Poly1305 using a key derived from the provided password. The encryption ensures that the key material is secure at rest.
The vault uses:
- **ChaCha20Poly1305** for symmetric encryption
- **Password-based key derivation** for keyspace encryption
- **Secure key storage** with proper isolation
## Best Practices
1. **Use Strong Passwords**: Since the security of your key spaces depends on the strength of your passwords, use strong, unique passwords.
2. **Backup Key Spaces**: Regularly backup your key spaces directory to prevent data loss.
3. **Script Organization**: Split your scripts into logical units, with separate scripts for key creation and key usage.
4. **Error Handling**: Always check the return values of functions to ensure operations succeeded before proceeding.
5. **Network Selection**: When working with Ethereum functionality, be explicit about which network you're targeting to avoid confusion.
6. **Gas Management**: For Ethereum transactions, consider gas costs and set appropriate gas limits.
## Next Steps
1. **Rhai Integration**: Implement Rhai bindings for Lee's vault
2. **New Examples**: Create examples using Lee's simpler API
3. **Documentation**: Complete API documentation for Lee's implementation
4. **Migration Guide**: Provide guidance for users migrating from Sameh's implementation

View File

@@ -0,0 +1,134 @@
//! Generic Application Deployment Example
//!
//! This example shows how to deploy any containerized application using the
//! KubernetesManager convenience methods. This works for any Docker image.
use sal_kubernetes::KubernetesManager;
use std::collections::HashMap;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create Kubernetes manager
let km = KubernetesManager::new("default").await?;
// Clean up any existing resources first
println!("=== Cleaning up existing resources ===");
let apps_to_clean = ["web-server", "node-app", "mongodb"];
for app in &apps_to_clean {
match km.deployment_delete(app).await {
Ok(_) => println!("✓ Deleted existing deployment: {}", app),
Err(_) => println!("✓ No existing deployment to delete: {}", app),
}
match km.service_delete(app).await {
Ok(_) => println!("✓ Deleted existing service: {}", app),
Err(_) => println!("✓ No existing service to delete: {}", app),
}
}
// Example 1: Simple web server deployment
println!("\n=== Example 1: Simple Nginx Web Server ===");
km.deploy_application("web-server", "nginx:latest", 2, 80, None, None)
.await?;
println!("✅ Nginx web server deployed!");
// Example 2: Node.js application with labels
println!("\n=== Example 2: Node.js Application ===");
let mut node_labels = HashMap::new();
node_labels.insert("app".to_string(), "node-app".to_string());
node_labels.insert("tier".to_string(), "backend".to_string());
node_labels.insert("environment".to_string(), "production".to_string());
// Configure Node.js environment variables
let mut node_env_vars = HashMap::new();
node_env_vars.insert("NODE_ENV".to_string(), "production".to_string());
node_env_vars.insert("PORT".to_string(), "3000".to_string());
node_env_vars.insert("LOG_LEVEL".to_string(), "info".to_string());
node_env_vars.insert("MAX_CONNECTIONS".to_string(), "1000".to_string());
km.deploy_application(
"node-app", // name
"node:18-alpine", // image
3, // replicas - scale to 3 instances
3000, // port
Some(node_labels), // labels
Some(node_env_vars), // environment variables
)
.await?;
println!("✅ Node.js application deployed!");
// Example 3: Database deployment (any database)
println!("\n=== Example 3: MongoDB Database ===");
let mut mongo_labels = HashMap::new();
mongo_labels.insert("app".to_string(), "mongodb".to_string());
mongo_labels.insert("type".to_string(), "database".to_string());
mongo_labels.insert("engine".to_string(), "mongodb".to_string());
// Configure MongoDB environment variables
let mut mongo_env_vars = HashMap::new();
mongo_env_vars.insert(
"MONGO_INITDB_ROOT_USERNAME".to_string(),
"admin".to_string(),
);
mongo_env_vars.insert(
"MONGO_INITDB_ROOT_PASSWORD".to_string(),
"mongopassword".to_string(),
);
mongo_env_vars.insert("MONGO_INITDB_DATABASE".to_string(), "myapp".to_string());
km.deploy_application(
"mongodb", // name
"mongo:6.0", // image
1, // replicas - single instance for simplicity
27017, // port
Some(mongo_labels), // labels
Some(mongo_env_vars), // environment variables
)
.await?;
println!("✅ MongoDB deployed!");
// Check status of all deployments
println!("\n=== Checking Deployment Status ===");
let deployments = km.deployments_list().await?;
for deployment in &deployments {
if let Some(name) = &deployment.metadata.name {
let total_replicas = deployment
.spec
.as_ref()
.and_then(|s| s.replicas)
.unwrap_or(0);
let ready_replicas = deployment
.status
.as_ref()
.and_then(|s| s.ready_replicas)
.unwrap_or(0);
println!(
"{}: {}/{} replicas ready",
name, ready_replicas, total_replicas
);
}
}
println!("\n🎉 All deployments completed!");
println!("\n💡 Key Points:");
println!(" • Any Docker image can be deployed using this simple interface");
println!(" • Use labels to organize and identify your applications");
println!(
" • The same method works for databases, web servers, APIs, and any containerized app"
);
println!(" • For advanced configuration, use the individual KubernetesManager methods");
println!(
" • Environment variables and resource limits can be added via direct Kubernetes API"
);
Ok(())
}

View File

@@ -0,0 +1,79 @@
//! PostgreSQL Cluster Deployment Example (Rhai)
//!
//! This script shows how to deploy a PostgreSQL cluster using Rhai scripting
//! with the KubernetesManager convenience methods.
print("=== PostgreSQL Cluster Deployment ===");
// Create Kubernetes manager for the database namespace
print("Creating Kubernetes manager for 'database' namespace...");
let km = kubernetes_manager_new("database");
print("✓ Kubernetes manager created");
// Create the namespace if it doesn't exist
print("Creating namespace 'database' if it doesn't exist...");
try {
create_namespace(km, "database");
print("✓ Namespace 'database' created");
} catch(e) {
if e.to_string().contains("already exists") {
print("✓ Namespace 'database' already exists");
} else {
print("⚠️ Warning: " + e);
}
}
// Clean up any existing resources first
print("\nCleaning up any existing PostgreSQL resources...");
try {
delete_deployment(km, "postgres-cluster");
print("✓ Deleted existing deployment");
} catch(e) {
print("✓ No existing deployment to delete");
}
try {
delete_service(km, "postgres-cluster");
print("✓ Deleted existing service");
} catch(e) {
print("✓ No existing service to delete");
}
// Create PostgreSQL cluster using the convenience method
print("\nDeploying PostgreSQL cluster...");
try {
// Deploy PostgreSQL using the convenience method
let result = deploy_application(km, "postgres-cluster", "postgres:15", 2, 5432, #{
"app": "postgres-cluster",
"type": "database",
"engine": "postgresql"
}, #{
"POSTGRES_DB": "myapp",
"POSTGRES_USER": "postgres",
"POSTGRES_PASSWORD": "secretpassword",
"PGDATA": "/var/lib/postgresql/data/pgdata"
});
print("✓ " + result);
print("\n✅ PostgreSQL cluster deployed successfully!");
print("\n📋 Connection Information:");
print(" Host: postgres-cluster.database.svc.cluster.local");
print(" Port: 5432");
print(" Database: postgres (default)");
print(" Username: postgres (default)");
print("\n🔧 To connect from another pod:");
print(" psql -h postgres-cluster.database.svc.cluster.local -U postgres");
print("\n💡 Next steps:");
print(" • Set POSTGRES_PASSWORD environment variable");
print(" • Configure persistent storage");
print(" • Set up backup and monitoring");
} catch(e) {
print("❌ Failed to deploy PostgreSQL cluster: " + e);
}
print("\n=== Deployment Complete ===");

View File

@@ -0,0 +1,112 @@
//! PostgreSQL Cluster Deployment Example
//!
//! This example shows how to deploy a PostgreSQL cluster using the
//! KubernetesManager convenience methods.
use sal_kubernetes::KubernetesManager;
use std::collections::HashMap;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create Kubernetes manager for the database namespace
let km = KubernetesManager::new("database").await?;
// Create the namespace if it doesn't exist
println!("Creating namespace 'database' if it doesn't exist...");
match km.namespace_create("database").await {
Ok(_) => println!("✓ Namespace 'database' created"),
Err(e) => {
if e.to_string().contains("already exists") {
println!("✓ Namespace 'database' already exists");
} else {
return Err(e.into());
}
}
}
// Clean up any existing resources first
println!("Cleaning up any existing PostgreSQL resources...");
match km.deployment_delete("postgres-cluster").await {
Ok(_) => println!("✓ Deleted existing deployment"),
Err(_) => println!("✓ No existing deployment to delete"),
}
match km.service_delete("postgres-cluster").await {
Ok(_) => println!("✓ Deleted existing service"),
Err(_) => println!("✓ No existing service to delete"),
}
// Configure PostgreSQL-specific labels
let mut labels = HashMap::new();
labels.insert("app".to_string(), "postgres-cluster".to_string());
labels.insert("type".to_string(), "database".to_string());
labels.insert("engine".to_string(), "postgresql".to_string());
// Configure PostgreSQL environment variables
let mut env_vars = HashMap::new();
env_vars.insert("POSTGRES_DB".to_string(), "myapp".to_string());
env_vars.insert("POSTGRES_USER".to_string(), "postgres".to_string());
env_vars.insert(
"POSTGRES_PASSWORD".to_string(),
"secretpassword".to_string(),
);
env_vars.insert(
"PGDATA".to_string(),
"/var/lib/postgresql/data/pgdata".to_string(),
);
// Deploy the PostgreSQL cluster using the convenience method
println!("Deploying PostgreSQL cluster...");
km.deploy_application(
"postgres-cluster", // name
"postgres:15", // image
2, // replicas (1 master + 1 replica)
5432, // port
Some(labels), // labels
Some(env_vars), // environment variables
)
.await?;
println!("✅ PostgreSQL cluster deployed successfully!");
// Check deployment status
let deployments = km.deployments_list().await?;
let postgres_deployment = deployments
.iter()
.find(|d| d.metadata.name.as_ref() == Some(&"postgres-cluster".to_string()));
if let Some(deployment) = postgres_deployment {
let total_replicas = deployment
.spec
.as_ref()
.and_then(|s| s.replicas)
.unwrap_or(0);
let ready_replicas = deployment
.status
.as_ref()
.and_then(|s| s.ready_replicas)
.unwrap_or(0);
println!(
"Deployment status: {}/{} replicas ready",
ready_replicas, total_replicas
);
}
println!("\n📋 Connection Information:");
println!(" Host: postgres-cluster.database.svc.cluster.local");
println!(" Port: 5432");
println!(" Database: postgres (default)");
println!(" Username: postgres (default)");
println!(" Password: Set POSTGRES_PASSWORD environment variable");
println!("\n🔧 To connect from another pod:");
println!(" psql -h postgres-cluster.database.svc.cluster.local -U postgres");
println!("\n💡 Next steps:");
println!(" • Set environment variables for database credentials");
println!(" • Add persistent volume claims for data storage");
println!(" • Configure backup and monitoring");
Ok(())
}

View File

@@ -0,0 +1,79 @@
//! Redis Cluster Deployment Example (Rhai)
//!
//! This script shows how to deploy a Redis cluster using Rhai scripting
//! with the KubernetesManager convenience methods.
print("=== Redis Cluster Deployment ===");
// Create Kubernetes manager for the cache namespace
print("Creating Kubernetes manager for 'cache' namespace...");
let km = kubernetes_manager_new("cache");
print("✓ Kubernetes manager created");
// Create the namespace if it doesn't exist
print("Creating namespace 'cache' if it doesn't exist...");
try {
create_namespace(km, "cache");
print("✓ Namespace 'cache' created");
} catch(e) {
if e.to_string().contains("already exists") {
print("✓ Namespace 'cache' already exists");
} else {
print("⚠️ Warning: " + e);
}
}
// Clean up any existing resources first
print("\nCleaning up any existing Redis resources...");
try {
delete_deployment(km, "redis-cluster");
print("✓ Deleted existing deployment");
} catch(e) {
print("✓ No existing deployment to delete");
}
try {
delete_service(km, "redis-cluster");
print("✓ Deleted existing service");
} catch(e) {
print("✓ No existing service to delete");
}
// Create Redis cluster using the convenience method
print("\nDeploying Redis cluster...");
try {
// Deploy Redis using the convenience method
let result = deploy_application(km, "redis-cluster", "redis:7-alpine", 3, 6379, #{
"app": "redis-cluster",
"type": "cache",
"engine": "redis"
}, #{
"REDIS_PASSWORD": "redispassword",
"REDIS_PORT": "6379",
"REDIS_DATABASES": "16",
"REDIS_MAXMEMORY": "256mb",
"REDIS_MAXMEMORY_POLICY": "allkeys-lru"
});
print("✓ " + result);
print("\n✅ Redis cluster deployed successfully!");
print("\n📋 Connection Information:");
print(" Host: redis-cluster.cache.svc.cluster.local");
print(" Port: 6379");
print("\n🔧 To connect from another pod:");
print(" redis-cli -h redis-cluster.cache.svc.cluster.local");
print("\n💡 Next steps:");
print(" • Configure Redis authentication");
print(" • Set up Redis clustering configuration");
print(" • Add persistent storage");
print(" • Configure memory policies");
} catch(e) {
print("❌ Failed to deploy Redis cluster: " + e);
}
print("\n=== Deployment Complete ===");

View File

@@ -0,0 +1,109 @@
//! Redis Cluster Deployment Example
//!
//! This example shows how to deploy a Redis cluster using the
//! KubernetesManager convenience methods.
use sal_kubernetes::KubernetesManager;
use std::collections::HashMap;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create Kubernetes manager for the cache namespace
let km = KubernetesManager::new("cache").await?;
// Create the namespace if it doesn't exist
println!("Creating namespace 'cache' if it doesn't exist...");
match km.namespace_create("cache").await {
Ok(_) => println!("✓ Namespace 'cache' created"),
Err(e) => {
if e.to_string().contains("already exists") {
println!("✓ Namespace 'cache' already exists");
} else {
return Err(e.into());
}
}
}
// Clean up any existing resources first
println!("Cleaning up any existing Redis resources...");
match km.deployment_delete("redis-cluster").await {
Ok(_) => println!("✓ Deleted existing deployment"),
Err(_) => println!("✓ No existing deployment to delete"),
}
match km.service_delete("redis-cluster").await {
Ok(_) => println!("✓ Deleted existing service"),
Err(_) => println!("✓ No existing service to delete"),
}
// Configure Redis-specific labels
let mut labels = HashMap::new();
labels.insert("app".to_string(), "redis-cluster".to_string());
labels.insert("type".to_string(), "cache".to_string());
labels.insert("engine".to_string(), "redis".to_string());
// Configure Redis environment variables
let mut env_vars = HashMap::new();
env_vars.insert("REDIS_PASSWORD".to_string(), "redispassword".to_string());
env_vars.insert("REDIS_PORT".to_string(), "6379".to_string());
env_vars.insert("REDIS_DATABASES".to_string(), "16".to_string());
env_vars.insert("REDIS_MAXMEMORY".to_string(), "256mb".to_string());
env_vars.insert(
"REDIS_MAXMEMORY_POLICY".to_string(),
"allkeys-lru".to_string(),
);
// Deploy the Redis cluster using the convenience method
println!("Deploying Redis cluster...");
km.deploy_application(
"redis-cluster", // name
"redis:7-alpine", // image
3, // replicas (Redis cluster nodes)
6379, // port
Some(labels), // labels
Some(env_vars), // environment variables
)
.await?;
println!("✅ Redis cluster deployed successfully!");
// Check deployment status
let deployments = km.deployments_list().await?;
let redis_deployment = deployments
.iter()
.find(|d| d.metadata.name.as_ref() == Some(&"redis-cluster".to_string()));
if let Some(deployment) = redis_deployment {
let total_replicas = deployment
.spec
.as_ref()
.and_then(|s| s.replicas)
.unwrap_or(0);
let ready_replicas = deployment
.status
.as_ref()
.and_then(|s| s.ready_replicas)
.unwrap_or(0);
println!(
"Deployment status: {}/{} replicas ready",
ready_replicas, total_replicas
);
}
println!("\n📋 Connection Information:");
println!(" Host: redis-cluster.cache.svc.cluster.local");
println!(" Port: 6379");
println!(" Password: Configure REDIS_PASSWORD environment variable");
println!("\n🔧 To connect from another pod:");
println!(" redis-cli -h redis-cluster.cache.svc.cluster.local");
println!("\n💡 Next steps:");
println!(" • Configure Redis authentication with environment variables");
println!(" • Set up Redis clustering configuration");
println!(" • Add persistent volume claims for data persistence");
println!(" • Configure memory limits and eviction policies");
Ok(())
}

View File

@@ -1,6 +1,7 @@
// Example of using the network modules in SAL through Rhai
// Shows TCP port checking, HTTP URL validation, and SSH command execution
// Function to print section header
fn section(title) {
print("\n");
@@ -19,14 +20,14 @@ let host = "localhost";
let port = 22;
print(`Checking if port ${port} is open on ${host}...`);
let is_open = tcp.check_port(host, port);
print(`Port ${port} is ${is_open ? "open" : "closed"}`);
print(`Port ${port} is ${if is_open { "open" } else { "closed" }}`);
// Check multiple ports
let ports = [22, 80, 443];
print(`Checking multiple ports on ${host}...`);
let port_results = tcp.check_ports(host, ports);
for result in port_results {
print(`Port ${result.port} is ${result.is_open ? "open" : "closed"}`);
print(`Port ${result.port} is ${if result.is_open { "open" } else { "closed" }}`);
}
// HTTP connectivity checks
@@ -39,7 +40,7 @@ let http = net::new_http_connector();
let url = "https://www.example.com";
print(`Checking if ${url} is reachable...`);
let is_reachable = http.check_url(url);
print(`${url} is ${is_reachable ? "reachable" : "unreachable"}`);
print(`${url} is ${if is_reachable { "reachable" } else { "unreachable" }}`);
// Check the status code of a URL
print(`Checking status code of ${url}...`);
@@ -68,7 +69,7 @@ if is_open {
let ssh = net::new_ssh_builder()
.host("localhost")
.port(22)
.user(os::get_env("USER") || "root")
.user(if os::get_env("USER") != () { os::get_env("USER") } else { "root" })
.timeout(10)
.build();

View File

@@ -1,7 +1,7 @@
print("Running a basic command using run().do()...");
print("Running a basic command using run().execute()...");
// Execute a simple command
let result = run("echo Hello from run_basic!").do();
let result = run("echo Hello from run_basic!").execute();
// Print the command result
print(`Command: echo Hello from run_basic!`);
@@ -13,6 +13,6 @@ print(`Stderr:\n${result.stderr}`);
// Example of a command that might fail (if 'nonexistent_command' doesn't exist)
// This will halt execution by default because ignore_error() is not used.
// print("Running a command that will fail (and should halt)...");
// let fail_result = run("nonexistent_command").do(); // This line will cause the script to halt if the command doesn't exist
// let fail_result = run("nonexistent_command").execute(); // This line will cause the script to halt if the command doesn't exist
print("Basic run() example finished.");

View File

@@ -2,7 +2,7 @@ print("Running a command that will fail, but ignoring the error...");
// Run a command that exits with a non-zero code (will fail)
// Using .ignore_error() prevents the script from halting
let result = run("exit 1").ignore_error().do();
let result = run("exit 1").ignore_error().execute();
print(`Command finished.`);
print(`Success: ${result.success}`); // This should be false
@@ -22,7 +22,7 @@ print("\nScript continued execution after the potentially failing command.");
// Example of a command that might fail due to OS error (e.g., command not found)
// This *might* still halt depending on how the underlying Rust function handles it,
// as ignore_error() primarily prevents halting on *command* non-zero exit codes.
// let os_error_result = run("nonexistent_command_123").ignore_error().do();
// let os_error_result = run("nonexistent_command_123").ignore_error().execute();
// print(`OS Error Command Success: ${os_error_result.success}`);
// print(`OS Error Command Exit Code: ${os_error_result.code}`);

View File

@@ -1,4 +1,4 @@
print("Running a command using run().log().do()...");
print("Running a command using run().log().execute()...");
// The .log() method will print the command string to the console before execution.
// This is useful for debugging or tracing which commands are being run.

View File

@@ -1,8 +1,8 @@
print("Running a command using run().silent().do()...\n");
print("Running a command using run().silent().execute()...\n");
// This command will print to standard output and standard error
// However, because .silent() is used, the output will not appear in the console directly
let result = run("echo 'This should be silent stdout.'; echo 'This should be silent stderr.' >&2; exit 0").silent().do();
let result = run("echo 'This should be silent stdout.'; echo 'This should be silent stderr.' >&2; exit 0").silent().execute();
// The output is still captured in the CommandResult
print(`Command finished.`);
@@ -12,7 +12,7 @@ print(`Captured Stdout:\\n${result.stdout}`);
print(`Captured Stderr:\\n${result.stderr}`);
// Example of a silent command that fails (but won't halt because we only suppress output)
// let fail_result = run("echo 'This is silent failure stderr.' >&2; exit 1").silent().do();
// let fail_result = run("echo 'This is silent failure stderr.' >&2; exit 1").silent().execute();
// print(`Failed command finished (silent):`);
// print(`Success: ${fail_result.success}`);
// print(`Exit Code: ${fail_result.code}`);

View File

@@ -1,13 +1,17 @@
// Basic Service Manager Usage Example
//
// This example demonstrates the basic API of the service manager.
// It works on both macOS (launchctl) and Linux (zinit).
// It works on both macOS (launchctl) and Linux (zinit/systemd).
//
// Prerequisites:
//
// Linux: Make sure zinit is running:
// Linux: The service manager will automatically discover running zinit servers
// or fall back to systemd. To use zinit, start it with:
// zinit -s /tmp/zinit.sock init
//
// You can also specify a custom socket path:
// export ZINIT_SOCKET_PATH=/your/custom/path/zinit.sock
//
// macOS: No additional setup required (uses launchctl).
//
// Usage:

View File

@@ -3,7 +3,7 @@
//! This library loads the Rhai engine, registers all SAL modules,
//! and executes Rhai scripts from a specified directory in sorted order.
use rhai::Engine;
use rhai::{Engine, Scope};
use std::error::Error;
use std::fs;
use std::path::{Path, PathBuf};
@@ -30,6 +30,19 @@ pub fn run(script_path: &str) -> Result<(), Box<dyn Error>> {
// Create a new Rhai engine
let mut engine = Engine::new();
// TODO: if we create a scope here we could clean up all the different functions and types registered with the engine.
// We should generalize the way we add things to the scope for each module separately.
let mut scope = Scope::new();
// Conditionally add Hetzner client only when env config is present
if let Ok(cfg) = sal::hetzner::config::Config::from_env() {
let hetzner_client = sal::hetzner::api::Client::new(cfg);
scope.push("hetzner", hetzner_client);
}
// This makes it easy to call e.g. `hetzner.get_server()` or `mycelium.get_connected_peers()`
// --> without the need to manually create a client for each one first
// --> could be conditionally compiled so that only the clients a script actually needs are pushed into the scope
// Register println function for output
engine.register_fn("println", |s: &str| println!("{}", s));
@@ -78,19 +91,20 @@ pub fn run(script_path: &str) -> Result<(), Box<dyn Error>> {
let script = fs::read_to_string(&script_file)?;
// Execute the script
match engine.eval::<rhai::Dynamic>(&script) {
Ok(result) => {
println!("Script executed successfully");
if !result.is_unit() {
println!("Result: {}", result);
}
}
Err(err) => {
eprintln!("Error executing script: {}", err);
// Exit with error code when a script fails
process::exit(1);
}
}
// match engine.eval::<rhai::Dynamic>(&script) {
// Ok(result) => {
// println!("Script executed successfully");
// if !result.is_unit() {
// println!("Result: {}", result);
// }
// }
// Err(err) => {
// eprintln!("Error executing script: {}", err);
// // Exit with error code when a script fails
// process::exit(1);
// }
// }
engine.run_with_scope(&mut scope, &script)?;
}
println!("\nAll scripts executed successfully!");

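The scope-injection idea in the comments above generalizes to any client type. A minimal self-contained sketch, with a hypothetical `DemoClient` standing in for `sal::hetzner::api::Client`:

```rust
use rhai::{Engine, Scope};

#[derive(Clone)]
struct DemoClient;

impl DemoClient {
    fn ping(&mut self) -> String {
        "pong".to_string()
    }
}

fn main() -> Result<(), Box<rhai::EvalAltResult>> {
    let mut engine = Engine::new();
    engine
        .register_type::<DemoClient>()
        .register_fn("ping", DemoClient::ping);

    // Push a ready-made client into the scope so scripts can use it directly,
    // e.g. `demo.ping()`, without constructing one themselves.
    let mut scope = Scope::new();
    scope.push("demo", DemoClient);

    engine.run_with_scope(&mut scope, r#"print(demo.ping());"#)?;
    Ok(())
}
```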
View File

@@ -1,227 +0,0 @@
# SAL Kubernetes (`sal-kubernetes`)
Kubernetes cluster management and operations for the System Abstraction Layer (SAL).
## Installation
Add this to your `Cargo.toml`:
```toml
[dependencies]
sal-kubernetes = "0.1.0"
```
## ⚠️ **IMPORTANT SECURITY NOTICE**
**This package includes destructive operations that can permanently delete Kubernetes resources!**
- The `delete(pattern)` function uses PCRE regex patterns to bulk delete resources
- **Always test patterns in a safe environment first**
- Use specific, anchored patterns to avoid accidental deletion of critical resources (see the sketch after this list)
- Consider the impact on dependent resources before deletion
- **No confirmation prompts** - deletions are immediate and irreversible
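As a sketch of the safer pattern (the namespace and resource names here are hypothetical), anchor the regex so it cannot match beyond the intended resources:

```rust
use sal_kubernetes::KubernetesManager;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let km = KubernetesManager::new("staging").await?;
    // Matches "test-web-1" but not "my-test-webserver" or "test-web-db".
    km.delete(r"^test-web-[0-9]+$").await?;
    Ok(())
}
```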
## Overview
This package provides a high-level interface for managing Kubernetes clusters using the `kube-rs` SDK. It focuses on namespace-scoped operations through the `KubernetesManager` factory pattern.
### Production Safety Features
- **Configurable Timeouts**: All operations have configurable timeouts to prevent hanging
- **Exponential Backoff Retry**: Automatic retry logic for transient failures
- **Rate Limiting**: Built-in rate limiting to prevent API overload
- **Comprehensive Error Handling**: Detailed error types and proper error propagation
- **Structured Logging**: Production-ready logging for monitoring and debugging
## Features
- **Namespace-scoped Management**: Each `KubernetesManager` instance operates on a single namespace
- **Pod Management**: List, create, and manage pods
- **Pattern-based Deletion**: Delete resources using PCRE pattern matching
- **Namespace Operations**: Create and manage namespaces (idempotent operations)
- **Resource Management**: Support for pods, services, deployments, configmaps, secrets, and more
- **Rhai Integration**: Full scripting support through Rhai wrappers
## Usage
### Basic Operations
```rust
use sal_kubernetes::KubernetesManager;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create a manager for the "default" namespace
let km = KubernetesManager::new("default").await?;
// List all pods in the namespace
let pods = km.pods_list().await?;
println!("Found {} pods", pods.len());
// Create a namespace (no error if it already exists)
km.namespace_create("my-namespace").await?;
// Delete resources matching a pattern
km.delete("test-.*").await?;
Ok(())
}
```
### Rhai Scripting
```javascript
// Create Kubernetes manager for namespace
let km = kubernetes_manager_new("default");
// List pods
let pods = pods_list(km);
print("Found " + pods.len() + " pods");
// Create namespace
namespace_create(km, "my-app");
// Delete test resources
delete(km, "test-.*");
```
## Dependencies
- `kube`: Kubernetes client library
- `k8s-openapi`: Kubernetes API types
- `tokio`: Async runtime
- `regex`: Pattern matching for resource deletion
- `rhai`: Scripting integration (optional)
## Configuration
### Kubernetes Authentication
The package uses the standard Kubernetes configuration methods:
- In-cluster configuration (when running in a pod)
- Kubeconfig file (`~/.kube/config` or `KUBECONFIG` environment variable)
- Service account tokens
### Production Safety Configuration
```rust
use sal_kubernetes::{KubernetesManager, KubernetesConfig};
use std::time::Duration;
// Create with custom configuration
let config = KubernetesConfig::new()
.with_timeout(Duration::from_secs(60))
.with_retries(5, Duration::from_secs(1), Duration::from_secs(30))
.with_rate_limit(20, 50);
let km = KubernetesManager::with_config("my-namespace", config).await?;
```
### Pre-configured Profiles
```rust
// High-throughput environment
let config = KubernetesConfig::high_throughput();
// Low-latency environment
let config = KubernetesConfig::low_latency();
// Development/testing
let config = KubernetesConfig::development();
```
## Error Handling
All operations return `Result<T, KubernetesError>` with comprehensive error types for different failure scenarios including API errors, configuration issues, and permission problems.
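For example, a caller can branch on the result of each operation (a minimal sketch; only `KubernetesManager::new` and `delete` are taken from this README, the handling around them is illustrative):
```rust
use sal_kubernetes::KubernetesManager;

#[tokio::main]
async fn main() {
    let km = match KubernetesManager::new("default").await {
        Ok(km) => km,
        Err(e) => {
            // Configuration or connection problems surface here.
            eprintln!("failed to build manager: {e}");
            return;
        }
    };
    if let Err(e) = km.delete("test-.*").await {
        // API errors, permission problems, etc. propagate as KubernetesError.
        eprintln!("bulk delete failed: {e}");
    }
}
```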
## API Reference
### KubernetesManager
The main interface for Kubernetes operations. Each instance is scoped to a single namespace.
#### Constructor
- `KubernetesManager::new(namespace)` - Create a manager for the specified namespace
#### Resource Listing
- `pods_list()` - List all pods in the namespace
- `services_list()` - List all services in the namespace
- `deployments_list()` - List all deployments in the namespace
- `configmaps_list()` - List all configmaps in the namespace
- `secrets_list()` - List all secrets in the namespace
#### Resource Management
- `pod_get(name)` - Get a specific pod by name
- `service_get(name)` - Get a specific service by name
- `deployment_get(name)` - Get a specific deployment by name
- `pod_delete(name)` - Delete a specific pod by name
- `service_delete(name)` - Delete a specific service by name
- `deployment_delete(name)` - Delete a specific deployment by name
#### Pattern-based Operations
- `delete(pattern)` - Delete all resources matching a PCRE pattern
#### Namespace Operations
- `namespace_create(name)` - Create a namespace (idempotent)
- `namespace_exists(name)` - Check if a namespace exists
- `namespaces_list()` - List all namespaces (cluster-wide)
#### Utility Functions
- `resource_counts()` - Get counts of all resource types in the namespace
- `namespace()` - Get the namespace this manager operates on
### Rhai Functions
When using the Rhai integration, the following functions are available:
- `kubernetes_manager_new(namespace)` - Create a KubernetesManager
- `pods_list(km)` - List pods
- `services_list(km)` - List services
- `deployments_list(km)` - List deployments
- `namespaces_list(km)` - List all namespaces
- `delete(km, pattern)` - Delete resources matching pattern
- `namespace_create(km, name)` - Create namespace
- `namespace_exists(km, name)` - Check namespace existence
- `resource_counts(km)` - Get resource counts
- `pod_delete(km, name)` - Delete specific pod
- `service_delete(km, name)` - Delete specific service
- `deployment_delete(km, name)` - Delete specific deployment
- `namespace(km)` - Get manager's namespace
## Examples
The `examples/kubernetes/` directory contains comprehensive examples:
- `basic_operations.rhai` - Basic listing and counting operations
- `namespace_management.rhai` - Creating and managing namespaces
- `pattern_deletion.rhai` - Using PCRE patterns for bulk deletion
- `multi_namespace_operations.rhai` - Working across multiple namespaces
## Testing
Run tests with:
```bash
# Unit tests (no cluster required)
cargo test --package sal-kubernetes
# Integration tests (requires cluster)
KUBERNETES_TEST_ENABLED=1 cargo test --package sal-kubernetes
# Rhai integration tests
KUBERNETES_TEST_ENABLED=1 cargo test --package sal-kubernetes --features rhai
```
## Security Considerations
- Always use specific PCRE patterns to avoid accidental deletion of important resources
- Test deletion patterns in a safe environment first
- Ensure proper RBAC permissions are configured
- Be cautious with cluster-wide operations like namespace listing
- Consider using dry-run approaches when possible

View File

@@ -0,0 +1,12 @@
[package]
name = "sal-hetzner"
version = "0.1.0"
edition = "2024"
[dependencies]
prettytable = "0.10.0"
reqwest.workspace = true
rhai = { workspace = true, features = ["serde"] }
serde = { workspace = true, features = ["derive"] }
serde_json.workspace = true
thiserror.workspace = true

View File

@@ -0,0 +1,54 @@
use std::fmt;
use serde::Deserialize;
use thiserror::Error;
#[derive(Debug, Error)]
pub enum AppError {
#[error("Request failed: {0}")]
RequestError(#[from] reqwest::Error),
#[error("API error: {0}")]
ApiError(ApiError),
#[error("Deserialization Error: {0:?}")]
SerdeJsonError(#[from] serde_json::Error),
}
#[derive(Debug, Deserialize)]
pub struct ApiError {
pub status: u16,
pub message: String,
}
impl From<reqwest::blocking::Response> for ApiError {
fn from(value: reqwest::blocking::Response) -> Self {
ApiError {
status: value.status().into(),
message: value.text().unwrap_or_else(|_| "The API call returned an error.".to_string()),
}
}
}
impl fmt::Display for ApiError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
#[derive(Deserialize)]
struct HetznerApiError {
code: String,
message: String,
}
#[derive(Deserialize)]
struct HetznerApiErrorWrapper {
error: HetznerApiError,
}
if let Ok(wrapper) = serde_json::from_str::<HetznerApiErrorWrapper>(&self.message) {
write!(
f,
"Status: {}, Code: {}, Message: {}",
self.status, wrapper.error.code, wrapper.error.message
)
} else {
write!(f, "Status: {}: {}", self.status, self.message)
}
}
}

View File

@@ -0,0 +1,513 @@
pub mod error;
pub mod models;
use self::models::{
Boot, Rescue, Server, SshKey, ServerAddonProduct, ServerAddonProductWrapper,
AuctionServerProduct, AuctionServerProductWrapper, AuctionTransaction,
AuctionTransactionWrapper, BootWrapper, Cancellation, CancellationWrapper,
OrderServerBuilder, OrderServerProduct, OrderServerProductWrapper, RescueWrapped,
ServerWrapper, SshKeyWrapper, Transaction, TransactionWrapper,
ServerAddonTransaction, ServerAddonTransactionWrapper,
OrderServerAddonBuilder,
};
use crate::api::error::ApiError;
use crate::config::Config;
use error::AppError;
use reqwest::blocking::Client as HttpClient;
use serde_json::json;
#[derive(Clone)]
pub struct Client {
http_client: HttpClient,
config: Config,
}
impl Client {
pub fn new(config: Config) -> Self {
Self {
http_client: HttpClient::new(),
config,
}
}
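/// Read the response body once, then either deserialize the success
/// payload or wrap the raw body in an ApiError for the caller.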
fn handle_response<T>(&self, response: reqwest::blocking::Response) -> Result<T, AppError>
where
T: serde::de::DeserializeOwned,
{
let status = response.status();
let body = response.text()?;
if status.is_success() {
serde_json::from_str::<T>(&body).map_err(Into::into)
} else {
Err(AppError::ApiError(ApiError {
status: status.as_u16(),
message: body,
}))
}
}
pub fn get_server(&self, server_number: i32) -> Result<Server, AppError> {
let response = self
.http_client
.get(format!("{}/server/{}", self.config.api_url, server_number))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: ServerWrapper = self.handle_response(response)?;
Ok(wrapped.server)
}
pub fn get_servers(&self) -> Result<Vec<Server>, AppError> {
let response = self
.http_client
.get(format!("{}/server", self.config.api_url))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: Vec<ServerWrapper> = self.handle_response(response)?;
let servers = wrapped.into_iter().map(|sw| sw.server).collect();
Ok(servers)
}
pub fn update_server_name(&self, server_number: i32, name: &str) -> Result<Server, AppError> {
let params = [("server_name", name)];
let response = self
.http_client
.post(format!("{}/server/{}", self.config.api_url, server_number))
.basic_auth(&self.config.username, Some(&self.config.password))
.form(&params)
.send()?;
let wrapped: ServerWrapper = self.handle_response(response)?;
Ok(wrapped.server)
}
pub fn get_cancellation_data(&self, server_number: i32) -> Result<Cancellation, AppError> {
let response = self
.http_client
.get(format!(
"{}/server/{}/cancellation",
self.config.api_url, server_number
))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: CancellationWrapper = self.handle_response(response)?;
Ok(wrapped.cancellation)
}
pub fn cancel_server(
&self,
server_number: i32,
cancellation_date: &str,
) -> Result<Cancellation, AppError> {
let params = [("cancellation_date", cancellation_date)];
let response = self
.http_client
.post(format!(
"{}/server/{}/cancellation",
self.config.api_url, server_number
))
.basic_auth(&self.config.username, Some(&self.config.password))
.form(&params)
.send()?;
let wrapped: CancellationWrapper = self.handle_response(response)?;
Ok(wrapped.cancellation)
}
pub fn withdraw_cancellation(&self, server_number: i32) -> Result<(), AppError> {
self.http_client
.delete(format!(
"{}/server/{}/cancellation",
self.config.api_url, server_number
))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
Ok(())
}
pub fn get_ssh_keys(&self) -> Result<Vec<SshKey>, AppError> {
let response = self
.http_client
.get(format!("{}/key", self.config.api_url))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: Vec<SshKeyWrapper> = self.handle_response(response)?;
let keys = wrapped.into_iter().map(|sk| sk.key).collect();
Ok(keys)
}
pub fn get_ssh_key(&self, fingerprint: &str) -> Result<SshKey, AppError> {
let response = self
.http_client
.get(format!("{}/key/{}", self.config.api_url, fingerprint))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: SshKeyWrapper = self.handle_response(response)?;
Ok(wrapped.key)
}
pub fn add_ssh_key(&self, name: &str, data: &str) -> Result<SshKey, AppError> {
let params = [("name", name), ("data", data)];
let response = self
.http_client
.post(format!("{}/key", self.config.api_url))
.basic_auth(&self.config.username, Some(&self.config.password))
.form(&params)
.send()?;
let wrapped: SshKeyWrapper = self.handle_response(response)?;
Ok(wrapped.key)
}
pub fn update_ssh_key_name(&self, fingerprint: &str, name: &str) -> Result<SshKey, AppError> {
let params = [("name", name)];
let response = self
.http_client
.post(format!("{}/key/{}", self.config.api_url, fingerprint))
.basic_auth(&self.config.username, Some(&self.config.password))
.form(&params)
.send()?;
let wrapped: SshKeyWrapper = self.handle_response(response)?;
Ok(wrapped.key)
}
pub fn delete_ssh_key(&self, fingerprint: &str) -> Result<(), AppError> {
self.http_client
.delete(format!("{}/key/{}", self.config.api_url, fingerprint))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
Ok(())
}
pub fn get_boot_configuration(&self, server_number: i32) -> Result<Boot, AppError> {
let response = self
.http_client
.get(format!("{}/boot/{}", self.config.api_url, server_number))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: BootWrapper = self.handle_response(response)?;
Ok(wrapped.boot)
}
pub fn get_rescue_boot_configuration(&self, server_number: i32) -> Result<Rescue, AppError> {
let response = self
.http_client
.get(format!(
"{}/boot/{}/rescue",
self.config.api_url, server_number
))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: RescueWrapped = self.handle_response(response)?;
Ok(wrapped.rescue)
}
pub fn enable_rescue_mode(
&self,
server_number: i32,
os: &str,
authorized_keys: Option<&[String]>,
) -> Result<Rescue, AppError> {
let mut params = vec![("os", os)];
if let Some(keys) = authorized_keys {
for key in keys {
params.push(("authorized_key[]", key));
}
}
let response = self
.http_client
.post(format!(
"{}/boot/{}/rescue",
self.config.api_url, server_number
))
.basic_auth(&self.config.username, Some(&self.config.password))
.form(&params)
.send()?;
let wrapped: RescueWrapped = self.handle_response(response)?;
Ok(wrapped.rescue)
}
pub fn disable_rescue_mode(&self, server_number: i32) -> Result<Rescue, AppError> {
let response = self
.http_client
.delete(format!(
"{}/boot/{}/rescue",
self.config.api_url, server_number
))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: RescueWrapped = self.handle_response(response)?;
Ok(wrapped.rescue)
}
pub fn get_server_products(
&self,
) -> Result<Vec<OrderServerProduct>, AppError> {
let response = self
.http_client
.get(format!("{}/order/server/product", &self.config.api_url))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: Vec<OrderServerProductWrapper> = self.handle_response(response)?;
let products = wrapped.into_iter().map(|sop| sop.product).collect();
Ok(products)
}
pub fn get_server_product_by_id(
&self,
product_id: &str,
) -> Result<OrderServerProduct, AppError> {
let response = self
.http_client
.get(format!(
"{}/order/server/product/{}",
&self.config.api_url, product_id
))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: OrderServerProductWrapper = self.handle_response(response)?;
Ok(wrapped.product)
}
pub fn order_server(&self, order: OrderServerBuilder) -> Result<Transaction, AppError> {
let mut params = json!({
"product_id": order.product_id,
"dist": order.dist,
"location": order.location,
"authorized_key": order.authorized_keys.unwrap_or_default(),
});
if let Some(addons) = order.addons {
params["addon"] = json!(addons);
}
if order.test == Some(true) {
params["test"] = json!(true);
}
let response = self
.http_client
.post(format!("{}/order/server/transaction", &self.config.api_url))
.basic_auth(&self.config.username, Some(&self.config.password))
.json(&params)
.send()?;
let wrapped: TransactionWrapper = self.handle_response(response)?;
Ok(wrapped.transaction)
}
pub fn get_transaction_by_id(&self, transaction_id: &str) -> Result<Transaction, AppError> {
let response = self
.http_client
.get(format!(
"{}/order/server/transaction/{}",
&self.config.api_url, transaction_id
))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: TransactionWrapper = self.handle_response(response)?;
Ok(wrapped.transaction)
}
pub fn get_transactions(&self) -> Result<Vec<Transaction>, AppError> {
let response = self
.http_client
.get(format!("{}/order/server/transaction", &self.config.api_url))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: Vec<TransactionWrapper> = self.handle_response(response)?;
let transactions = wrapped.into_iter().map(|t| t.transaction).collect();
Ok(transactions)
}
pub fn get_auction_server_products(&self) -> Result<Vec<AuctionServerProduct>, AppError> {
let response = self
.http_client
.get(format!(
"{}/order/server_market/product",
&self.config.api_url
))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: Vec<AuctionServerProductWrapper> = self.handle_response(response)?;
let products = wrapped.into_iter().map(|asp| asp.product).collect();
Ok(products)
}
pub fn get_auction_server_product_by_id(&self, product_id: &str) -> Result<AuctionServerProduct, AppError> {
let response = self
.http_client
.get(format!("{}/order/server_market/product/{}", &self.config.api_url, product_id))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: AuctionServerProductWrapper = self.handle_response(response)?;
Ok(wrapped.product)
}
pub fn get_auction_transactions(&self) -> Result<Vec<AuctionTransaction>, AppError> {
let response = self
.http_client
.get(format!("{}/order/server_market/transaction", &self.config.api_url))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: Vec<AuctionTransactionWrapper> = self.handle_response(response)?;
let transactions = wrapped.into_iter().map(|t| t.transaction).collect();
Ok(transactions)
}
pub fn get_auction_transaction_by_id(&self, transaction_id: &str) -> Result<AuctionTransaction, AppError> {
let response = self
.http_client
.get(format!("{}/order/server_market/transaction/{}", &self.config.api_url, transaction_id))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: AuctionTransactionWrapper = self.handle_response(response)?;
Ok(wrapped.transaction)
}
pub fn get_server_addon_products(
&self,
server_number: i64,
) -> Result<Vec<ServerAddonProduct>, AppError> {
let response = self
.http_client
.get(format!(
"{}/order/server_addon/{}/product",
&self.config.api_url, server_number
))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: Vec<ServerAddonProductWrapper> = self.handle_response(response)?;
let products = wrapped.into_iter().map(|sap| sap.product).collect();
Ok(products)
}
pub fn order_auction_server(
&self,
product_id: i64,
authorized_keys: Vec<String>,
dist: Option<String>,
arch: Option<String>,
lang: Option<String>,
comment: Option<String>,
addons: Option<Vec<String>>,
test: Option<bool>,
) -> Result<AuctionTransaction, AppError> {
let mut params: Vec<(&str, String)> = Vec::new();
params.push(("product_id", product_id.to_string()));
for key in &authorized_keys {
params.push(("authorized_key[]", key.clone()));
}
if let Some(dist) = dist {
params.push(("dist", dist));
}
if let Some(arch) = arch {
// The Robot API's "arch" parameter is deprecated but still accepted
params.push(("arch", arch));
}
if let Some(lang) = lang {
params.push(("lang", lang));
}
if let Some(comment) = comment {
params.push(("comment", comment));
}
if let Some(addons) = addons {
for addon in addons {
params.push(("addon[]", addon));
}
}
if let Some(test) = test {
params.push(("test", test.to_string()));
}
let response = self
.http_client
.post(format!("{}/order/server_market/transaction", &self.config.api_url))
.basic_auth(&self.config.username, Some(&self.config.password))
.form(&params)
.send()?;
let wrapped: AuctionTransactionWrapper = self.handle_response(response)?;
Ok(wrapped.transaction)
}
pub fn get_server_addon_transactions(&self) -> Result<Vec<ServerAddonTransaction>, AppError> {
let response = self
.http_client
.get(format!("{}/order/server_addon/transaction", &self.config.api_url))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: Vec<ServerAddonTransactionWrapper> = self.handle_response(response)?;
let transactions = wrapped.into_iter().map(|satw| satw.transaction).collect();
Ok(transactions)
}
pub fn get_server_addon_transaction_by_id(
&self,
transaction_id: &str,
) -> Result<ServerAddonTransaction, AppError> {
let response = self
.http_client
.get(format!(
"{}/order/server_addon/transaction/{}",
&self.config.api_url, transaction_id
))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: ServerAddonTransactionWrapper = self.handle_response(response)?;
Ok(wrapped.transaction)
}
pub fn order_server_addon(
&self,
order: OrderServerAddonBuilder,
) -> Result<ServerAddonTransaction, AppError> {
let mut params = json!({
"server_number": order.server_number,
"product_id": order.product_id,
});
if let Some(reason) = order.reason {
params["reason"] = json!(reason);
}
if let Some(gateway) = order.gateway {
params["gateway"] = json!(gateway);
}
if order.test == Some(true) {
params["test"] = json!(true);
}
let response = self
.http_client
.post(format!("{}/order/server_addon/transaction", &self.config.api_url))
.basic_auth(&self.config.username, Some(&self.config.password))
.form(&params)
.send()?;
let wrapped: ServerAddonTransactionWrapper = self.handle_response(response)?;
Ok(wrapped.transaction)
}
}

File diff suppressed because it is too large

View File

@@ -0,0 +1,25 @@
use std::env;
#[derive(Clone)]
pub struct Config {
pub username: String,
pub password: String,
pub api_url: String,
}
impl Config {
pub fn from_env() -> Result<Self, String> {
let username = env::var("HETZNER_USERNAME")
.map_err(|_| "HETZNER_USERNAME environment variable not set".to_string())?;
let password = env::var("HETZNER_PASSWORD")
.map_err(|_| "HETZNER_PASSWORD environment variable not set".to_string())?;
let api_url = env::var("HETZNER_API_URL")
.unwrap_or_else(|_| "https://robot-ws.your-server.de".to_string());
Ok(Config {
username,
password,
api_url,
})
}
}

View File

@@ -0,0 +1,3 @@
pub mod api;
pub mod config;
pub mod rhai;

View File

@@ -0,0 +1,63 @@
use crate::api::{
models::{Boot, Rescue},
Client,
};
use rhai::{plugin::*, Engine};
pub fn register(engine: &mut Engine) {
let boot_module = exported_module!(boot_api);
engine.register_global_module(boot_module.into());
}
#[export_module]
pub mod boot_api {
use super::*;
use rhai::EvalAltResult;
#[rhai_fn(name = "get_boot_configuration", return_raw)]
pub fn get_boot_configuration(
client: &mut Client,
server_number: i64,
) -> Result<Boot, Box<EvalAltResult>> {
client
.get_boot_configuration(server_number as i32)
.map_err(|e| e.to_string().into())
}
#[rhai_fn(name = "get_rescue_boot_configuration", return_raw)]
pub fn get_rescue_boot_configuration(
client: &mut Client,
server_number: i64,
) -> Result<Rescue, Box<EvalAltResult>> {
client
.get_rescue_boot_configuration(server_number as i32)
.map_err(|e| e.to_string().into())
}
#[rhai_fn(name = "enable_rescue_mode", return_raw)]
pub fn enable_rescue_mode(
client: &mut Client,
server_number: i64,
os: &str,
authorized_keys: rhai::Array,
) -> Result<Rescue, Box<EvalAltResult>> {
let keys: Vec<String> = authorized_keys
.into_iter()
.map(|k| k.into_string().unwrap())
.collect();
client
.enable_rescue_mode(server_number as i32, os, Some(&keys))
.map_err(|e| e.to_string().into())
}
#[rhai_fn(name = "disable_rescue_mode", return_raw)]
pub fn disable_rescue_mode(
client: &mut Client,
server_number: i64,
) -> Result<Rescue, Box<EvalAltResult>> {
client
.disable_rescue_mode(server_number as i32)
.map_err(|e| e.to_string().into())
}
}

View File

@@ -0,0 +1,54 @@
use rhai::{Engine, EvalAltResult};
use crate::api::models::{
AuctionServerProduct, AuctionTransaction, AuctionTransactionProduct, AuthorizedKey, Boot,
Cancellation, Cpanel, HostKey, Linux, OrderAuctionServerBuilder, OrderServerAddonBuilder,
OrderServerBuilder, OrderServerProduct, Plesk, Rescue, Server, ServerAddonProduct,
ServerAddonResource, ServerAddonTransaction, SshKey, Transaction, TransactionProduct, Vnc,
Windows,
};
pub mod boot;
pub mod printing;
pub mod server;
pub mod server_ordering;
pub mod ssh_keys;
// Register the hetzner module: types first, then the per-area API modules
pub fn register_hetzner_module(engine: &mut Engine) -> Result<(), Box<EvalAltResult>> {
// Register the exposed types with the engine
engine.build_type::<Server>();
engine.build_type::<SshKey>();
engine.build_type::<Boot>();
engine.build_type::<Rescue>();
engine.build_type::<Linux>();
engine.build_type::<Vnc>();
engine.build_type::<Windows>();
engine.build_type::<Plesk>();
engine.build_type::<Cpanel>();
engine.build_type::<Cancellation>();
engine.build_type::<OrderServerProduct>();
engine.build_type::<Transaction>();
engine.build_type::<AuthorizedKey>();
engine.build_type::<TransactionProduct>();
engine.build_type::<HostKey>();
engine.build_type::<AuctionServerProduct>();
engine.build_type::<AuctionTransaction>();
engine.build_type::<AuctionTransactionProduct>();
engine.build_type::<OrderAuctionServerBuilder>();
engine.build_type::<OrderServerBuilder>();
engine.build_type::<ServerAddonProduct>();
engine.build_type::<ServerAddonTransaction>();
engine.build_type::<ServerAddonResource>();
engine.build_type::<OrderServerAddonBuilder>();
server::register(engine);
ssh_keys::register(engine);
boot::register(engine);
server_ordering::register(engine);
// TODO: push hetzner to scope as value client:
// scope.push("hetzner", client);
Ok(())
}
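For reference, host-side wiring might look like this (a sketch under assumptions: the binary and script are hypothetical; only `register_hetzner_module`, `Config::from_env`, `Client::new`, and the `get_servers` Rhai function come from this diff):
```rust
use rhai::{Engine, Scope};
use sal_hetzner::api::Client;
use sal_hetzner::config::Config;
use sal_hetzner::rhai::register_hetzner_module;

fn main() {
    let mut engine = Engine::new();
    register_hetzner_module(&mut engine).expect("failed to register hetzner module");

    // Push the client into the scope, as the TODO above suggests.
    let config = Config::from_env().expect("HETZNER_USERNAME / HETZNER_PASSWORD must be set");
    let mut scope = Scope::new();
    scope.push("hetzner", Client::new(config));

    engine
        .run_with_scope(
            &mut scope,
            r#"
                let servers = get_servers(hetzner);
                print("got " + servers.len() + " servers");
            "#,
        )
        .expect("script failed");
}
```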

View File

@@ -0,0 +1,43 @@
use rhai::{Array, Engine};
use crate::{api::models::{OrderServerProduct, AuctionServerProduct, AuctionTransaction, ServerAddonProduct, ServerAddonTransaction, Server, SshKey}};
mod servers_table;
mod ssh_keys_table;
mod server_ordering_table;
// This will be called when we print(...) or pretty_print() an Array (with Dynamic values)
pub fn pretty_print_dispatch(array: Array) {
if array.is_empty() {
println!("<empty table>");
return;
}
let first = &array[0];
if first.is::<Server>() {
println!("Yeah first is server!");
servers_table::pretty_print_servers(array);
} else if first.is::<SshKey>() {
ssh_keys_table::pretty_print_ssh_keys(array);
}
else if first.is::<OrderServerProduct>() {
server_ordering_table::pretty_print_server_products(array);
} else if first.is::<AuctionServerProduct>() {
server_ordering_table::pretty_print_auction_server_products(array);
} else if first.is::<AuctionTransaction>() {
server_ordering_table::pretty_print_auction_transactions(array);
} else if first.is::<ServerAddonProduct>() {
server_ordering_table::pretty_print_server_addon_products(array);
} else if first.is::<ServerAddonTransaction>() {
server_ordering_table::pretty_print_server_addon_transactions(array);
} else {
// Generic fallback for other types
for item in array {
println!("{}", item.to_string());
}
}
}
pub fn register(engine: &mut Engine) {
engine.register_fn("pretty_print", pretty_print_dispatch);
}

View File

@@ -0,0 +1,293 @@
use prettytable::{row, Table};
use crate::api::models::{OrderServerProduct, ServerAddonProduct, ServerAddonTransaction, ServerAddonResource};
pub fn pretty_print_server_products(products: rhai::Array) {
let mut table = Table::new();
table.add_row(row![b =>
"ID",
"Name",
"Description",
"Traffic",
"Location",
"Price (Net)",
"Price (Gross)",
]);
for product_dyn in products {
if let Some(product) = product_dyn.try_cast::<OrderServerProduct>() {
let mut price_net = "N/A".to_string();
let mut price_gross = "N/A".to_string();
if let Some(first_price) = product.prices.first() {
price_net = first_price.price.net.clone();
price_gross = first_price.price.gross.clone();
}
table.add_row(row![
product.id,
product.name,
product.description.join(", "),
product.traffic,
product.location.join(", "),
price_net,
price_gross,
]);
}
}
table.printstd();
}
pub fn pretty_print_auction_server_products(products: rhai::Array) {
let mut table = Table::new();
table.add_row(row![b =>
"ID",
"Name",
"Description",
"Traffic",
"Distributions",
"Architectures",
"Languages",
"CPU",
"CPU Benchmark",
"Memory Size (GB)",
"HDD Size (GB)",
"HDD Text",
"HDD Count",
"Datacenter",
"Network Speed",
"Price (Net)",
"Price (Hourly Net)",
"Price (Setup Net)",
"Price (VAT)",
"Price (Hourly VAT)",
"Price (Setup VAT)",
"Fixed Price",
"Next Reduce (seconds)",
"Next Reduce Date",
"Orderable Addons",
]);
for product_dyn in products {
if let Some(product) = product_dyn.try_cast::<crate::api::models::AuctionServerProduct>() {
let mut addons_table = Table::new();
addons_table.add_row(row![b => "ID", "Name", "Min", "Max", "Prices"]);
for addon in &product.orderable_addons {
let mut addon_prices_table = Table::new();
addon_prices_table.add_row(row![b => "Location", "Net", "Gross", "Hourly Net", "Hourly Gross", "Setup Net", "Setup Gross"]);
for price in &addon.prices {
addon_prices_table.add_row(row![
price.location,
price.price.net,
price.price.gross,
price.price.hourly_net,
price.price.hourly_gross,
price.price_setup.net,
price.price_setup.gross
]);
}
addons_table.add_row(row![
addon.id,
addon.name,
addon.min,
addon.max,
addon_prices_table
]);
}
table.add_row(row![
product.id,
product.name,
product.description.join(", "),
product.traffic,
product.dist.join(", "),
product.arch.as_deref().unwrap_or_default().join(", "),
product.lang.join(", "),
product.cpu,
product.cpu_benchmark,
product.memory_size,
product.hdd_size,
product.hdd_text,
product.hdd_count,
product.datacenter,
product.network_speed,
product.price,
product.price_hourly.as_deref().unwrap_or("N/A"),
product.price_setup,
product.price_with_vat,
product.price_hourly_with_vat.as_deref().unwrap_or("N/A"),
product.price_setup_with_vat,
product.fixed_price,
product.next_reduce,
product.next_reduce_date,
addons_table,
]);
}
}
table.printstd();
}
pub fn pretty_print_server_addon_products(products: rhai::Array) {
let mut table = Table::new();
table.add_row(row![b =>
"ID",
"Name",
"Type",
"Location",
"Price (Net)",
"Price (Gross)",
"Hourly Net",
"Hourly Gross",
"Setup Net",
"Setup Gross",
]);
for product_dyn in products {
if let Some(product) = product_dyn.try_cast::<ServerAddonProduct>() {
table.add_row(row![
product.id,
product.name,
product.product_type,
product.price.location,
product.price.price.net,
product.price.price.gross,
product.price.price.hourly_net,
product.price.price.hourly_gross,
product.price.price_setup.net,
product.price.price_setup.gross,
]);
}
}
table.printstd();
}
pub fn pretty_print_auction_transactions(transactions: rhai::Array) {
let mut table = Table::new();
table.add_row(row![b =>
"ID",
"Date",
"Status",
"Server Number",
"Server IP",
"Comment",
"Product ID",
"Product Name",
"Product Traffic",
"Product Distributions",
"Product Architectures",
"Product Languages",
"Product CPU",
"Product CPU Benchmark",
"Product Memory Size (GB)",
"Product HDD Size (GB)",
"Product HDD Text",
"Product HDD Count",
"Product Datacenter",
"Product Network Speed",
"Product Fixed Price",
"Product Next Reduce (seconds)",
"Product Next Reduce Date",
"Addons",
]);
for transaction_dyn in transactions {
if let Some(transaction) = transaction_dyn.try_cast::<crate::api::models::AuctionTransaction>() {
let _authorized_keys_table = {
let mut table = Table::new();
table.add_row(row![b => "Name", "Fingerprint", "Type", "Size"]);
for key in &transaction.authorized_key {
table.add_row(row![
key.key.name.as_deref().unwrap_or("N/A"),
key.key.fingerprint.as_deref().unwrap_or("N/A"),
key.key.key_type.as_deref().unwrap_or("N/A"),
key.key.size.map_or("N/A".to_string(), |s| s.to_string())
]);
}
table
};
let _host_keys_table = {
let mut table = Table::new();
table.add_row(row![b => "Fingerprint", "Type", "Size"]);
for key in &transaction.host_key {
table.add_row(row![
key.key.fingerprint.as_deref().unwrap_or("N/A"),
key.key.key_type.as_deref().unwrap_or("N/A"),
key.key.size.map_or("N/A".to_string(), |s| s.to_string())
]);
}
table
};
table.add_row(row![
transaction.id,
transaction.date,
transaction.status,
transaction.server_number.map_or("N/A".to_string(), |id| id.to_string()),
transaction.server_ip.as_deref().unwrap_or("N/A"),
transaction.comment.as_deref().unwrap_or("N/A"),
transaction.product.id,
transaction.product.name,
transaction.product.traffic,
transaction.product.dist,
transaction.product.arch.as_deref().unwrap_or("N/A"),
transaction.product.lang,
transaction.product.cpu,
transaction.product.cpu_benchmark,
transaction.product.memory_size,
transaction.product.hdd_size,
transaction.product.hdd_text,
transaction.product.hdd_count,
transaction.product.datacenter,
transaction.product.network_speed,
transaction.product.fixed_price.unwrap_or_default().to_string(),
transaction
.product
.next_reduce
.map_or("N/A".to_string(), |r| r.to_string()),
transaction
.product
.next_reduce_date
.as_deref()
.unwrap_or("N/A"),
transaction.addons.join(", "),
]);
}
}
table.printstd();
}
pub fn pretty_print_server_addon_transactions(transactions: rhai::Array) {
let mut table = Table::new();
table.add_row(row![b =>
"ID",
"Date",
"Status",
"Server Number",
"Product ID",
"Product Name",
"Product Price",
"Resources",
]);
for transaction_dyn in transactions {
if let Some(transaction) = transaction_dyn.try_cast::<ServerAddonTransaction>() {
let mut resources_table = Table::new();
resources_table.add_row(row![b => "Type", "ID"]);
for resource in &transaction.resources {
resources_table.add_row(row![resource.resource_type, resource.id]);
}
table.add_row(row![
transaction.id,
transaction.date,
transaction.status,
transaction.server_number,
transaction.product.id,
transaction.product.name,
transaction.product.price.to_string(),
resources_table,
]);
}
}
table.printstd();
}

View File

@@ -0,0 +1,30 @@
use prettytable::{row, Table};
use rhai::Array;
use super::Server;
pub fn pretty_print_servers(servers: Array) {
let mut table = Table::new();
table.add_row(row![b =>
"Number",
"Name",
"IP",
"Product",
"DC",
"Status"
]);
for server_dyn in servers {
if let Some(server) = server_dyn.try_cast::<Server>() {
table.add_row(row![
server.server_number.to_string(),
server.server_name,
server.server_ip.unwrap_or("N/A".to_string()),
server.product,
server.dc,
server.status
]);
}
}
table.printstd();
}

View File

@@ -0,0 +1,26 @@
use prettytable::{row, Table};
use super::SshKey;
pub fn pretty_print_ssh_keys(keys: rhai::Array) {
let mut table = Table::new();
table.add_row(row![b =>
"Name",
"Fingerprint",
"Type",
"Size",
"Created At"
]);
for key_dyn in keys {
if let Some(key) = key_dyn.try_cast::<SshKey>() {
table.add_row(row![
key.name,
key.fingerprint,
key.key_type,
key.size.to_string(),
key.created_at
]);
}
}
table.printstd();
}

View File

@@ -0,0 +1,76 @@
use crate::api::{Client, models::Server};
use rhai::{Array, Dynamic, plugin::*};
pub fn register(engine: &mut Engine) {
let server_module = exported_module!(server_api);
engine.register_global_module(server_module.into());
}
#[export_module]
pub mod server_api {
use crate::api::models::Cancellation;
use super::*;
use rhai::EvalAltResult;
#[rhai_fn(name = "get_server", return_raw)]
pub fn get_server(
client: &mut Client,
server_number: i64,
) -> Result<Server, Box<EvalAltResult>> {
client
.get_server(server_number as i32)
.map_err(|e| e.to_string().into())
}
#[rhai_fn(name = "get_servers", return_raw)]
pub fn get_servers(client: &mut Client) -> Result<Array, Box<EvalAltResult>> {
let servers = client
.get_servers()
.map_err(|e| Into::<Box<EvalAltResult>>::into(e.to_string()))?;
println!("number of SERVERS we got: {:#?}", servers.len());
Ok(servers.into_iter().map(Dynamic::from).collect())
}
#[rhai_fn(name = "update_server_name", return_raw)]
pub fn update_server_name(
client: &mut Client,
server_number: i64,
name: &str,
) -> Result<Server, Box<EvalAltResult>> {
client
.update_server_name(server_number as i32, name)
.map_err(|e| e.to_string().into())
}
#[rhai_fn(name = "get_cancellation_data", return_raw)]
pub fn get_cancellation_data(
client: &mut Client,
server_number: i64,
) -> Result<Cancellation, Box<EvalAltResult>> {
client
.get_cancellation_data(server_number as i32)
.map_err(|e| e.to_string().into())
}
#[rhai_fn(name = "cancel_server", return_raw)]
pub fn cancel_server(
client: &mut Client,
server_number: i64,
cancellation_date: &str,
) -> Result<Cancellation, Box<EvalAltResult>> {
client
.cancel_server(server_number as i32, cancellation_date)
.map_err(|e| e.to_string().into())
}
#[rhai_fn(name = "withdraw_cancellation", return_raw)]
pub fn withdraw_cancellation(
client: &mut Client,
server_number: i64,
) -> Result<(), Box<EvalAltResult>> {
client
.withdraw_cancellation(server_number as i32)
.map_err(|e| e.to_string().into())
}
}

View File

@@ -0,0 +1,170 @@
use crate::api::{
Client,
models::{
AuctionServerProduct, AuctionTransaction, OrderAuctionServerBuilder, OrderServerBuilder,
OrderServerProduct, ServerAddonProduct, ServerAddonTransaction, Transaction,
},
};
use rhai::{Array, Dynamic, plugin::*};
pub fn register(engine: &mut Engine) {
let server_order_module = exported_module!(server_order_api);
engine.register_global_module(server_order_module.into());
}
#[export_module]
pub mod server_order_api {
use crate::api::models::OrderServerAddonBuilder;
#[rhai_fn(name = "get_server_products", return_raw)]
pub fn get_server_ordering_product_overview(
client: &mut Client,
) -> Result<Array, Box<EvalAltResult>> {
let overview_servers = client
.get_server_products()
.map_err(|e| Into::<Box<EvalAltResult>>::into(e.to_string()))?;
Ok(overview_servers.into_iter().map(Dynamic::from).collect())
}
#[rhai_fn(name = "get_server_product_by_id", return_raw)]
pub fn get_server_ordering_product_by_id(
client: &mut Client,
product_id: &str,
) -> Result<OrderServerProduct, Box<EvalAltResult>> {
let product = client
.get_server_product_by_id(product_id)
.map_err(|e| Into::<Box<EvalAltResult>>::into(e.to_string()))?;
Ok(product)
}
#[rhai_fn(name = "order_server", return_raw)]
pub fn order_server(
client: &mut Client,
order: OrderServerBuilder,
) -> Result<Transaction, Box<EvalAltResult>> {
let transaction = client
.order_server(order)
.map_err(|e| Into::<Box<EvalAltResult>>::into(e.to_string()))?;
Ok(transaction)
}
#[rhai_fn(name = "get_transaction_by_id", return_raw)]
pub fn get_transaction_by_id(
client: &mut Client,
transaction_id: &str,
) -> Result<Transaction, Box<EvalAltResult>> {
let transaction = client
.get_transaction_by_id(transaction_id)
.map_err(|e| Into::<Box<EvalAltResult>>::into(e.to_string()))?;
Ok(transaction)
}
#[rhai_fn(name = "get_transactions", return_raw)]
pub fn get_transactions(client: &mut Client) -> Result<Array, Box<EvalAltResult>> {
let transactions = client
.get_transactions()
.map_err(|e| Into::<Box<EvalAltResult>>::into(e.to_string()))?;
Ok(transactions.into_iter().map(Dynamic::from).collect())
}
#[rhai_fn(name = "get_auction_server_products", return_raw)]
pub fn get_auction_server_products(client: &mut Client) -> Result<Array, Box<EvalAltResult>> {
let products = client
.get_auction_server_products()
.map_err(|e| Into::<Box<EvalAltResult>>::into(e.to_string()))?;
Ok(products.into_iter().map(Dynamic::from).collect())
}
#[rhai_fn(name = "get_auction_server_product_by_id", return_raw)]
pub fn get_auction_server_product_by_id(
client: &mut Client,
product_id: &str,
) -> Result<AuctionServerProduct, Box<EvalAltResult>> {
let product = client
.get_auction_server_product_by_id(product_id)
.map_err(|e| Into::<Box<EvalAltResult>>::into(e.to_string()))?;
Ok(product)
}
#[rhai_fn(name = "get_auction_transactions", return_raw)]
pub fn get_auction_transactions(client: &mut Client) -> Result<Array, Box<EvalAltResult>> {
let transactions = client
.get_auction_transactions()
.map_err(|e| Into::<Box<EvalAltResult>>::into(e.to_string()))?;
Ok(transactions.into_iter().map(Dynamic::from).collect())
}
#[rhai_fn(name = "get_auction_transaction_by_id", return_raw)]
pub fn get_auction_transaction_by_id(
client: &mut Client,
transaction_id: &str,
) -> Result<AuctionTransaction, Box<EvalAltResult>> {
let transaction = client
.get_auction_transaction_by_id(transaction_id)
.map_err(|e| Into::<Box<EvalAltResult>>::into(e.to_string()))?;
Ok(transaction)
}
#[rhai_fn(name = "get_server_addon_products", return_raw)]
pub fn get_server_addon_products(
client: &mut Client,
server_number: i64,
) -> Result<Array, Box<EvalAltResult>> {
let products = client
.get_server_addon_products(server_number)
.map_err(|e| Into::<Box<EvalAltResult>>::into(e.to_string()))?;
Ok(products.into_iter().map(Dynamic::from).collect())
}
#[rhai_fn(name = "get_server_addon_transactions", return_raw)]
pub fn get_server_addon_transactions(
client: &mut Client,
) -> Result<Array, Box<EvalAltResult>> {
let transactions = client
.get_server_addon_transactions()
.map_err(|e| Into::<Box<EvalAltResult>>::into(e.to_string()))?;
Ok(transactions.into_iter().map(Dynamic::from).collect())
}
#[rhai_fn(name = "get_server_addon_transaction_by_id", return_raw)]
pub fn get_server_addon_transaction_by_id(
client: &mut Client,
transaction_id: &str,
) -> Result<ServerAddonTransaction, Box<EvalAltResult>> {
let transaction = client
.get_server_addon_transaction_by_id(transaction_id)
.map_err(|e| Into::<Box<EvalAltResult>>::into(e.to_string()))?;
Ok(transaction)
}
#[rhai_fn(name = "order_auction_server", return_raw)]
pub fn order_auction_server(
client: &mut Client,
order: OrderAuctionServerBuilder,
) -> Result<AuctionTransaction, Box<EvalAltResult>> {
println!("Builder struct being used to order server: {:#?}", order);
let transaction = client.order_auction_server(
order.product_id,
order.authorized_keys.unwrap_or_default(),
order.dist,
None,
order.lang,
order.comment,
order.addon,
order.test,
).map_err(|e| Into::<Box<EvalAltResult>>::into(e.to_string()))?;
Ok(transaction)
}
#[rhai_fn(name = "order_server_addon", return_raw)]
pub fn order_server_addon(
client: &mut Client,
order: OrderServerAddonBuilder,
) -> Result<ServerAddonTransaction, Box<EvalAltResult>> {
println!("Builder struct being used to order server addon: {:#?}", order);
let transaction = client
.order_server_addon(order)
.map_err(|e| Into::<Box<EvalAltResult>>::into(e.to_string()))?;
Ok(transaction)
}
}

View File

@@ -0,0 +1,89 @@
use crate::api::{Client, models::SshKey};
use prettytable::{Table, row};
use rhai::{Array, Dynamic, Engine, plugin::*};
pub fn register(engine: &mut Engine) {
let ssh_keys_module = exported_module!(ssh_keys_api);
engine.register_global_module(ssh_keys_module.into());
}
#[export_module]
pub mod ssh_keys_api {
use super::*;
use rhai::EvalAltResult;
#[rhai_fn(name = "get_ssh_keys", return_raw)]
pub fn get_ssh_keys(client: &mut Client) -> Result<Array, Box<EvalAltResult>> {
let ssh_keys = client
.get_ssh_keys()
.map_err(|e| Into::<Box<EvalAltResult>>::into(e.to_string()))?;
Ok(ssh_keys.into_iter().map(Dynamic::from).collect())
}
#[rhai_fn(name = "get_ssh_key", return_raw)]
pub fn get_ssh_key(
client: &mut Client,
fingerprint: &str,
) -> Result<SshKey, Box<EvalAltResult>> {
client
.get_ssh_key(fingerprint)
.map_err(|e| e.to_string().into())
}
#[rhai_fn(name = "add_ssh_key", return_raw)]
pub fn add_ssh_key(
client: &mut Client,
name: &str,
data: &str,
) -> Result<SshKey, Box<EvalAltResult>> {
client
.add_ssh_key(name, data)
.map_err(|e| e.to_string().into())
}
#[rhai_fn(name = "update_ssh_key_name", return_raw)]
pub fn update_ssh_key_name(
client: &mut Client,
fingerprint: &str,
name: &str,
) -> Result<SshKey, Box<EvalAltResult>> {
client
.update_ssh_key_name(fingerprint, name)
.map_err(|e| e.to_string().into())
}
#[rhai_fn(name = "delete_ssh_key", return_raw)]
pub fn delete_ssh_key(
client: &mut Client,
fingerprint: &str,
) -> Result<(), Box<EvalAltResult>> {
client
.delete_ssh_key(fingerprint)
.map_err(|e| e.to_string().into())
}
#[rhai_fn(name = "pretty_print")]
pub fn pretty_print_ssh_keys(keys: Array) {
let mut table = Table::new();
table.add_row(row![b =>
"Name",
"Fingerprint",
"Type",
"Size",
"Created At"
]);
for key_dyn in keys {
if let Some(key) = key_dyn.try_cast::<SshKey>() {
table.add_row(row![
key.name,
key.fingerprint,
key.key_type,
key.size.to_string(),
key.created_at
]);
}
}
table.printstd();
}
}

View File

@@ -9,22 +9,22 @@ license = "Apache-2.0"
[dependencies]
# HTTP client for async requests
reqwest = { version = "0.12.15", features = ["json"] }
reqwest = { workspace = true }
# JSON handling
serde_json = "1.0"
serde_json = { workspace = true }
# Base64 encoding/decoding for message payloads
base64 = "0.22.1"
base64 = { workspace = true }
# Async runtime
tokio = { version = "1.45.0", features = ["full"] }
tokio = { workspace = true }
# Rhai scripting support
rhai = { version = "1.12.0", features = ["sync"] }
rhai = { workspace = true }
# Logging
log = "0.4"
log = { workspace = true }
# URL encoding for API parameters
urlencoding = "2.1.3"
urlencoding = { workspace = true }
[dev-dependencies]
# For async testing
tokio-test = "0.4.4"
tokio-test = { workspace = true }
# For temporary files in tests
tempfile = "3.5"
tempfile = { workspace = true }
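These `workspace = true` entries assume the shared versions are declared once in the root manifest. A minimal sketch of the corresponding `[workspace.dependencies]` table, using only the versions visible in this diff:
```toml
# Root Cargo.toml (sketch; only the entries shown above)
[workspace.dependencies]
reqwest = { version = "0.12.15", features = ["json"] }
serde_json = "1.0"
base64 = "0.22.1"
tokio = { version = "1.45.0", features = ["full"] }
rhai = { version = "1.12.0", features = ["sync"] }
log = "0.4"
urlencoding = "2.1.3"
tokio-test = "0.4.4"
tempfile = "3.5"
```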

View File

@@ -11,24 +11,24 @@ categories = ["database", "api-bindings"]
[dependencies]
# PostgreSQL client dependencies
postgres = "0.19.4"
postgres-types = "0.2.5"
tokio-postgres = "0.7.8"
postgres = { workspace = true }
postgres-types = { workspace = true }
tokio-postgres = { workspace = true }
# Connection pooling
r2d2 = "0.8.10"
r2d2_postgres = "0.18.2"
r2d2 = { workspace = true }
r2d2_postgres = { workspace = true }
# Utility dependencies
lazy_static = "1.4.0"
thiserror = "2.0.12"
lazy_static = { workspace = true }
thiserror = { workspace = true }
# Rhai scripting support
rhai = { version = "1.12.0", features = ["sync"] }
rhai = { workspace = true }
# SAL dependencies
sal-virt = { path = "../virt" }
sal-virt = { workspace = true }
[dev-dependencies]
tempfile = "3.5"
tokio-test = "0.4.4"
tempfile = { workspace = true }
tokio-test = { workspace = true }

View File

@@ -11,11 +11,11 @@ categories = ["database", "caching", "api-bindings"]
[dependencies]
# Core Redis functionality
redis = "0.31.0"
lazy_static = "1.4.0"
redis = { workspace = true }
lazy_static = { workspace = true }
# Rhai integration (optional)
rhai = { version = "1.12.0", features = ["sync"], optional = true }
rhai = { workspace = true, optional = true }
[features]
default = ["rhai"]
@@ -23,4 +23,4 @@ rhai = ["dep:rhai"]
[dev-dependencies]
# For testing
tempfile = "3.5"
tempfile = { workspace = true }

View File

@@ -9,20 +9,20 @@ license = "Apache-2.0"
[dependencies]
# Core dependencies
anyhow = "1.0.98"
futures = "0.3.30"
lazy_static = "1.4.0"
log = "0.4"
serde_json = "1.0"
thiserror = "2.0.12"
tokio = { version = "1.45.0", features = ["full"] }
anyhow = { workspace = true }
futures = { workspace = true }
lazy_static = { workspace = true }
log = { workspace = true }
serde_json = { workspace = true }
thiserror = { workspace = true }
tokio = { workspace = true }
# Zinit client
zinit-client = "0.3.0"
zinit-client = { workspace = true }
# Rhai integration
rhai = { version = "1.12.0", features = ["sync"] }
rhai = { workspace = true }
[dev-dependencies]
tokio-test = "0.4.4"
tempfile = "3.5"
tokio-test = { workspace = true }
tempfile = { workspace = true }

View File

@@ -149,16 +149,20 @@ impl ZinitClientWrapper {
// Get logs with real implementation
pub async fn logs(&self, filter: Option<String>) -> Result<Vec<String>, ZinitError> {
use futures::StreamExt;
use tokio::time::{timeout, Duration};
// The logs method requires a follow parameter and filter
let follow = false; // Don't follow logs, just get existing ones
let mut log_stream = self.client.logs(follow, filter).await?;
let mut logs = Vec::new();
// Collect logs from the stream with a reasonable limit
// Collect logs from the stream with a reasonable limit and timeout
let mut count = 0;
const MAX_LOGS: usize = 1000;
const LOG_TIMEOUT: Duration = Duration::from_secs(5);
// Use timeout to prevent hanging
let result = timeout(LOG_TIMEOUT, async {
while let Some(log_result) = log_stream.next().await {
match log_result {
Ok(log_entry) => {
@@ -175,10 +179,23 @@ impl ZinitClientWrapper {
}
}
}
})
.await;
// Handle timeout - this is not an error, just means no more logs available
match result {
Ok(_) => Ok(logs),
Err(_) => {
log::debug!(
"Log reading timed out after {} seconds, returning {} logs",
LOG_TIMEOUT.as_secs(),
logs.len()
);
Ok(logs)
}
}
}
}
// Get the Zinit client instance
pub async fn get_zinit_client(socket_path: &str) -> Result<Arc<ZinitClientWrapper>, ZinitError> {

Some files were not shown because too many files have changed in this diff