148 Commits

Author SHA1 Message Date
Timur Gordon
9865e601d7 Remove _archive directory causing build conflicts 2025-10-20 22:29:22 +02:00
Timur Gordon
7afa5ea1c0 Merge branch 'development' of https://git.ourworld.tf/herocode/sal into development 2025-09-01 15:57:49 +02:00
Timur Gordon
6c2d96c9a5 service manager impl 2025-09-01 15:54:34 +02:00
Sameh Abouel-saad
b2fc0976bd style: format code and reorganize imports across rfsclient codebase 2025-08-28 03:50:07 +03:00
Sameh Abouel-saad
e114404ca7 fix: refactor rhai functions to use Map parameters 2025-08-28 03:43:00 +03:00
Sameh Abouel-saad
536779f521 fix rfsclient rhai example and update docs 2025-08-27 22:29:25 +03:00
Sameh Abouelsaad
c2969621b1 feat: implement RFS client with authentication and file management APIs 2025-08-27 17:59:20 +03:00
Timur Gordon
b39f24ca8f add install script and compilation fixes 2025-08-27 13:17:52 +02:00
f87a1d7f80 Merge branch 'development' of git.ourworld.tf:herocode/herolib_rust into development 2025-08-25 07:06:52 +02:00
17e5924e0b ... 2025-08-25 07:06:50 +02:00
Maxime Van Hees
768e3e176d fixed overlapping workspace roots 2025-08-21 16:20:15 +02:00
Timur Gordon
aa0248ef17 move rhailib to herolib 2025-08-21 14:32:24 +02:00
Maxime Van Hees
aab2b6f128 fixed cloud hypervisor issues + updated test script (working now) 2025-08-21 13:32:03 +02:00
Maxime Van Hees
d735316b7f cloud-hypervisor SAL + rhai test script for it 2025-08-20 18:01:21 +02:00
Maxime Van Hees
d1c80863b8 fixed test script errors 2025-08-20 15:42:12 +02:00
Maxime Van Hees
169c62da47 Merge branch 'development' of https://git.ourworld.tf/herocode/herolib_rust into development 2025-08-20 14:45:57 +02:00
Maxime Van Hees
33a5f24981 qcow2 SAL + rhai script to test functionality 2025-08-20 14:44:29 +02:00
Timur Gordon
d7562ce466 add data packages and remove empty submodule 2025-08-07 12:13:37 +02:00
ca736d62f3 /// 2025-08-06 03:27:49 +02:00
Maxime Van Hees
078c6f723b merging changes 2025-08-05 20:28:20 +02:00
Maxime Van Hees
9fdb8d8845 integrated hetzner client in repo + showcase of using scope for 'cleaner' scripts 2025-08-05 20:27:14 +02:00
8203a3b1ff Merge branch 'development' of git.ourworld.tf:herocode/herolib_rust into development 2025-08-05 16:39:01 +02:00
1770ac561e ... 2025-08-05 16:39:00 +02:00
Maxime Van Hees
eed6dbf8dc added robot hetzner code to research for later import into the codebase 2025-08-05 16:32:29 +02:00
4cd4e04028 ... 2025-08-05 16:22:25 +02:00
8cc828fc0e ...... 2025-08-05 16:21:33 +02:00
56af312aad ... 2025-08-05 16:04:55 +02:00
dfd6931c5b ... 2025-08-05 16:00:24 +02:00
6e01f99958 ... 2025-08-05 15:43:13 +02:00
0c02d0e99f ... 2025-08-05 15:33:03 +02:00
7856fc0a4e ... 2025-07-14 13:53:01 +04:00
Mahmoud-Emad
758e59e921 docs: Improve README.md with clearer structure and installation
- Update README.md to provide a clearer structure and improved
  installation instructions.  This makes it easier for users to
  understand and use the library.
- Remove outdated and unnecessary sections like the workspace
  structure details, publishing status, and detailed features
  lists. The information is either not relevant anymore or can be
  found elsewhere.
- Simplify installation instructions to focus on the core aspects
  of installing individual packages or the meta-package with
  features.
- Add a dedicated section for building and running tests,
  improving developer experience and making the process more
  transparent.
- Modernize the overall layout and formatting for better
  readability.
2025-07-13 12:51:08 +03:00
f1806eb788 Merge pull request 'feat: Update SAL Vault examples and documentation' (#24) from development_vault into development
Reviewed-on: herocode/sal#24
2025-07-13 09:31:53 +00:00
Mahmoud-Emad
6e5d9b35e8 feat: Update SAL Vault examples and documentation
- Renamed examples directory to `_archive` to reflect legacy status.
- Updated README.md to reflect current status of vault module,
  including migration from Sameh's implementation to Lee's.
- Temporarily disabled Rhai scripting integration for the vault.
- Added notes regarding current and future development steps.
2025-07-10 14:03:43 +03:00
61f5331804 Merge pull request 'feat: Update zinit-client dependency to 0.4.0' (#23) from development_service_manager into development
Reviewed-on: herocode/sal#23
2025-07-10 08:29:07 +00:00
Mahmoud-Emad
423b7bfa7e feat: Update zinit-client dependency to 0.4.0
- Upgrade `zinit-client` dependency to version 0.4.0 across all
  relevant crates. This resolves potential compatibility issues
  and incorporates bug fixes and improvements from the latest
  release.

- Improve error handling and logging in `zinit-client` and
  `service_manager` to provide more informative feedback, and add a
  timeout to log retrieval to prevent indefinite blocking and hangs.

- Update `publish-all.sh` script to correctly handle the
  `service_manager` crate during publishing.  Improves handling of
  special cases in the publishing script.

- Add `zinit-client.workspace = true` to `Cargo.toml` to ensure
  consistent dependency management across the workspace.  This
  ensures the correct version of `zinit-client` is used everywhere.
2025-07-10 11:27:59 +03:00
fc2830da31 Merge pull request 'Kubernetes Clusters' (#22) from development_kubernetes_clusters into development
Reviewed-on: herocode/sal#22
2025-07-09 21:41:25 +00:00
Mahmoud-Emad
6b12001ca2 feat: Add Kubernetes examples and update dependencies
- Add Kubernetes examples demonstrating deployment of various
  applications (PostgreSQL, Redis, generic). This improves the
  documentation and provides practical usage examples.
- Add `tokio` dependency for async examples. This enables the use
  of asynchronous operations in the examples.
- Add `once_cell` dependency for improved resource management in
  Kubernetes module. This allows efficient management of
  singletons and other resources.
2025-07-10 00:40:11 +03:00
Mahmoud-Emad
99e121b0d8 feat: Providing some clusters for kubernetes 2025-07-08 16:37:10 +03:00
Mahmoud-Emad
502e345f91 feat: Enhance service manager with zinit socket discovery and systemd fallback
- Improve Linux support by automatically discovering zinit sockets
  using environment variables and common paths (see the sketch after
  this entry).
- Add fallback to systemd if no zinit server is detected.
- Enhance README with detailed instructions for zinit usage,
  including custom socket path configuration.
- Add example demonstrating zinit socket discovery.
- Add logging to show socket discovery process.
- Add unit tests for service manager creation and socket discovery.
2025-07-02 16:37:27 +03:00
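A minimal sketch of the discovery order this commit describes: an explicit environment variable first, then common socket paths, then a systemd fallback. The variable name and paths below are assumptions for illustration, not necessarily what the crate actually probes.

```rust
use std::env;
use std::path::PathBuf;

/// Return the first zinit socket that exists, if any (hypothetical sketch).
fn discover_zinit_socket() -> Option<PathBuf> {
    // 1. An explicit override via environment variable wins.
    if let Ok(path) = env::var("ZINIT_SOCKET_PATH") {
        let p = PathBuf::from(path);
        if p.exists() {
            return Some(p);
        }
    }
    // 2. Fall back to common default locations (assumed paths).
    for candidate in ["/var/run/zinit.sock", "/tmp/zinit.sock"] {
        let p = PathBuf::from(candidate);
        if p.exists() {
            return Some(p);
        }
    }
    // 3. Nothing found: the caller falls back to systemd.
    None
}

fn main() {
    match discover_zinit_socket() {
        Some(p) => println!("using zinit at {}", p.display()),
        None => println!("no zinit socket found, falling back to systemd"),
    }
}
```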
Mahmoud-Emad
352e846410 feat: Improve Zinit service manager integration
- Handle arguments and working directory correctly in Zinit: The
  Zinit service manager now correctly handles arguments and
  working directories passed to services, ensuring consistent
  behavior across different service managers.  This fixes issues
  where commands would fail due to incorrect argument parsing or
  missing working directory settings.

- Simplify Zinit service configuration: The Zinit service
  configuration is now simplified, using a more concise and
  readable format. This improves maintainability and reduces the
  complexity of the service configuration process.

- Refactor Zinit service start: This refactors the Zinit service
  start functionality for better readability and maintainability.
  The changes improve the code structure and reduce the complexity
  of the code.
2025-07-02 13:39:11 +03:00
Mahmoud-Emad
b72c50bed9 feat: Improve publish-all.sh script to handle zinit_client
- Correctly handle the `zinit_client` crate name in package
  publication checks and messages.  The script previously
  failed to account for the difference between the directory
  name and the package name.
- Improve the clarity of published crate names in output messages
  for better user understanding.  This makes the output more
  consistent and user friendly.
2025-07-02 12:46:42 +03:00
Mahmoud-Emad
95122dffee feat: Improve service manager testing and error handling
- Add comprehensive testing instructions to README.
- Improve error handling in examples to prevent crashes.
- Enhance launchctl error handling for production safety.
- Improve zinit error handling for production safety.
- Remove obsolete plan_to_fix.md file.
- Update Rhai integration tests for improved robustness.
- Improve service manager creation on Linux with systemd fallback.
2025-07-02 12:05:03 +03:00
Mahmoud-Emad
a63cbe2bd9 Fix service manager examples to use production-ready API
- Updated simple_service.rs to use start(config) instead of create() + start(name); see the sketch after this entry
- Updated service_spaghetti.rs to use the same unified API
- Fixed factory function calls to use create_service_manager() without parameters
- All examples now compile and work with the production-ready synchronous API
- Maintains backward compatibility while providing cleaner interface
2025-07-02 10:46:47 +03:00
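An illustrative reconstruction of the unified API named above (one `start(config)` call replacing `create()` + `start(name)`); the trait, struct, and field names are assumptions made for this sketch, not the crate's actual items.

```rust
/// Hypothetical service description passed to a single start() call.
struct ServiceConfig {
    name: String,
    command: String,
}

/// Hypothetical manager interface; real backends would be launchctl,
/// zinit, or systemd depending on the platform.
trait ServiceManager {
    fn start(&self, config: &ServiceConfig) -> Result<(), String>;
}

struct NoopManager;

impl ServiceManager for NoopManager {
    fn start(&self, config: &ServiceConfig) -> Result<(), String> {
        // A real backend would register and launch the service here.
        println!("starting {}: {}", config.name, config.command);
        Ok(())
    }
}

/// Factory without parameters, as described in the commit message.
fn create_service_manager() -> impl ServiceManager {
    NoopManager
}

fn main() -> Result<(), String> {
    // One call with the full config, instead of create() followed by start(name).
    let mgr = create_service_manager();
    mgr.start(&ServiceConfig {
        name: "web".into(),
        command: "myapp --serve".into(),
    })
}
```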
Mahmoud-Emad
1e4c0ac41a Resolve merge conflicts - keep production-ready service manager implementation
- Resolved conflicts in service_manager/src/lib.rs
- Resolved conflicts in service_manager/src/launchctl.rs
- Resolved conflicts in service_manager/src/zinit.rs
- Resolved conflicts in service_manager/README.md
- Kept our production-ready synchronous API design
- Maintained comprehensive service lifecycle management
- Preserved cross-platform compatibility (macOS/Linux)
- All tests passing and ready for production use
2025-07-02 10:34:56 +03:00
Mahmoud-Emad
0e49be8d71 Merge branch 'main' of https://git.threefold.info/herocode/sal 2025-07-02 10:25:48 +03:00
Timur Gordon
32339e6063 service manager add examples and improvements 2025-07-02 05:50:18 +02:00
Mahmoud-Emad
131d978450 feat: Add service manager support
- Add a new service manager crate for dynamic service management
- Integrate service manager with Rhai for scripting
- Provide examples for circle worker management and basic usage
- Add comprehensive tests for service lifecycle and error handling
- Implement cross-platform support for macOS and Linux (zinit/systemd)
2025-07-01 18:00:21 +03:00
Mahmoud-Emad
46ad848e7e Merge branch 'main' of https://git.threefold.info/herocode/sal 2025-07-01 11:26:35 +03:00
Timur Gordon
b4e370b668 add service manager sal 2025-07-01 09:11:45 +02:00
ef8cc74d2b Merge pull request 'feat: Update SAL crate structure and documentation' (#18) from development_kubernetes into main
Reviewed-on: herocode/sal#18
2025-07-01 06:05:31 +00:00
Mahmoud-Emad
23db07b0bd feat: Update SAL crate structure and documentation
- Reduced the number of SAL crates from 16 to 15.
- Removed redundant core modules from README examples.
- Updated README to reflect the current state of published crates.
- Added `Cargo.toml.bak` to `.gitignore` to prevent accidental commits.
- Improved the clarity and accuracy of the README's installation instructions.
- Updated `publish-all.sh` script to handle existing crate versions and improve dependency management.
2025-07-01 09:04:23 +03:00
b4dfa7733d Merge pull request 'development_kubernetes' (#17) from development_kubernetes into main
Reviewed-on: herocode/sal#17
2025-07-01 05:35:06 +00:00
Mahmoud-Emad
e01b83f12a feat: Add CI/CD workflows for testing and publishing SAL crates
- Add a workflow for testing the publishing setup
- Add a workflow for publishing SAL crates to crates.io
- Improve crate metadata and version management
- Add optional dependencies for modularity
- Improve documentation for publishing and usage
2025-07-01 08:34:20 +03:00
Mahmoud-Emad
52f2f7e3c4 feat: Add Kubernetes module to SAL
- Add Kubernetes cluster management and operations
- Include pod, service, and deployment management
- Implement pattern-based resource deletion
- Support namespace creation and management
- Provide Rhai scripting wrappers for all functions
- Include production safety features (timeouts, retries, rate limiting)
2025-06-30 14:56:54 +03:00
717cd7b16f Merge pull request 'development_monorepo' (#13) from development_monorepo into main
Reviewed-on: herocode/sal#13
2025-06-24 09:40:00 +00:00
Mahmoud-Emad
e125bb6511 feat: Migrate SAL to Cargo workspace
- Migrate individual modules to independent crates
- Refactor dependencies for improved modularity
- Update build system and testing infrastructure
- Update documentation to reflect new structure
2025-06-24 12:39:18 +03:00
Mahmoud-Emad
8012a66250 feat: Add Rhai scripting support
- Add new `sal-rhai` crate for Rhai scripting integration
- Integrate Rhai with existing SAL modules
- Improve error handling for Rhai scripts and SAL functions
- Add comprehensive unit and integration tests for `sal-rhai`
2025-06-23 16:23:51 +03:00
Mahmoud-Emad
6dead402a2 feat: Remove herodo from monorepo and update dependencies
- Removed the `herodo` binary from the monorepo. This was
  done as part of the monorepo conversion process.
- Updated the `Cargo.toml` file to reflect the removal of
  `herodo` and adjust dependencies accordingly.
- Updated `src/lib.rs` and `src/rhai/mod.rs` to use the new
  `sal-vault` crate for vault functionality.  This improves
  the modularity and maintainability of the project.
2025-06-23 14:56:03 +03:00
Mahmoud-Emad
c94467c205 feat: Add herodo package to workspace
- Added the `herodo` package to the workspace.
- Updated the MONOREPO_CONVERSION_PLAN.md to reflect
  the completion of the herodo package conversion.
- Updated README.md and build_herodo.sh to reflect the
  new package structure.
- Created herodo/Cargo.toml, herodo/README.md,
  herodo/src/main.rs, herodo/src/lib.rs, and
  herodo/tests/integration_tests.rs and
  herodo/tests/unit_tests.rs.
2025-06-23 13:19:20 +03:00
Mahmoud-Emad
b737cd6337 feat: convert postgresclient module to independent sal-postgresclient package
- Move src/postgresclient/ to postgresclient/ package structure
- Add comprehensive test suite (28 tests) with real PostgreSQL operations
- Maintain Rhai integration with all 10 wrapper functions
- Update workspace configuration and dependencies
- Add complete documentation with usage examples
- Remove old module and update all references
- Ensure zero regressions in existing functionality

Closes: postgresclient monorepo conversion
2025-06-23 03:12:26 +03:00
Mahmoud-Emad
455f84528b feat: Add support for virt package
- Add sal-virt package to the workspace members
- Update MONOREPO_CONVERSION_PLAN.md to reflect the
  completion of sal-process and sal-virt packages
- Update src/lib.rs to include sal-virt
- Update src/postgresclient to use sal-virt instead of local
  virt module
- Update tests to use sal-virt
2025-06-23 02:37:14 +03:00
Mahmoud-Emad
3e3d0a1d45 feat: Add process package to monorepo
- Add `sal-process` package for cross-platform process management.
- Update workspace members in `Cargo.toml`.
- Mark process package as complete in MONOREPO_CONVERSION_PLAN.md
- Remove license information from `mycelium` and `os` READMEs.
2025-06-22 11:41:10 +03:00
Mahmoud-Emad
511729c477 feat: Add zinit_client package to workspace
- Add `zinit_client` package to the workspace, enabling its use
  in the SAL monorepo.  This allows for better organization and
  dependency management.
- Update `MONOREPO_CONVERSION_PLAN.md` to reflect the addition
  of `zinit_client` and its status.  This ensures the conversion
  plan stays up-to-date.
- Move `src/zinit_client/` directory to `zinit_client/` for better
   organization.  This improves the overall structure of the
   project.
- Update references to `zinit_client` to use the new path.  This
  ensures the codebase correctly links to the `zinit_client`
  package.
2025-06-22 10:59:19 +03:00
Mahmoud-Emad
74217364fa feat: Add sal-net package to workspace
- Add new sal-net package to the workspace.
- Update MONOREPO_CONVERSION_PLAN.md to reflect the
  addition of the sal-net package and mark it as
  production-ready.
- Add Cargo.toml and README.md for the sal-net package.
2025-06-22 09:52:20 +03:00
Mahmoud-Emad
d22fd686b7 feat: Add os package to monorepo conversion plan
- Added the `os` package to the list of converted packages in the
  monorepo conversion plan.
- Updated the success metrics and quality metrics sections to reflect
  the completion of the `os` package.  This ensures the plan
  accurately reflects the current state of the conversion.
2025-06-21 15:51:07 +03:00
Mahmoud-Emad
c4cdb8126c feat: Add support for new OS package
- Add a new `sal-os` package containing OS interaction utilities.
- Update workspace members to include the new package.
- Add README and basic usage examples for the new package.
2025-06-21 15:45:43 +03:00
Mahmoud-Emad
a35edc2030 docs: Update MONOREPO_CONVERSION_PLAN.md with text package status
- Marked the `text` package as production-ready in the
  conversion plan.
- Added quality metrics achieved for the `text` package,
  including test coverage, security features, and error
  handling.
- Updated the success metrics checklist to reflect the
  `text` package's completion.
2025-06-19 14:51:30 +03:00
Mahmoud-Emad
a7a7353aa1 feat: Add sal-text crate
- Add a new crate `sal-text` for text manipulation utilities.
- Integrate `sal-text` into the main `sal` crate.
- Remove the previous `text` module from `sal`.  This improves
  organization and allows for independent development of the
  `sal-text` library.
2025-06-19 14:43:27 +03:00
Mahmoud-Emad
4a8d3bfd24 feat: Add mycelium package to workspace
- Add the `mycelium` package to the workspace members.
- Add `sal-mycelium` dependency to `Cargo.toml`.
- Update MONOREPO_CONVERSION_PLAN.md to reflect the addition
  and completion of the mycelium package.
2025-06-19 12:11:55 +03:00
Mahmoud-Emad
3e617c2489 feat: Add redisclient package to the monorepo
- Integrate the redisclient package into the workspace.
- Update the MONOREPO_CONVERSION_PLAN.md to reflect the
  completion of the redisclient package conversion.
  This includes marking its conversion as complete and
  updating the success metrics.
- Add the redisclient package's Cargo.toml file.
- Add the redisclient package's source code files.
- Add tests for the redisclient package.
- Add README file for the redisclient package.
2025-06-18 17:53:03 +03:00
Mahmoud-Emad
4d51518f31 docs: Enhance MONOREPO_CONVERSION_PLAN.md with improved details
- Specify production-ready implementation details for sal-git
  package.
- Add a detailed code review and quality assurance process
  section.
- Include comprehensive success metrics and validation checklists
  for production readiness.
- Improve security considerations and risk mitigation strategies.
- Add stricter code review criteria based on sal-git's conversion.
- Update README with security configurations and environment
  variables.
2025-06-18 15:15:07 +03:00
Mahmoud-Emad
e031b03e04 feat: Convert SAL to a Rust monorepo
- Migrate SAL project from single-crate to monorepo structure
- Create independent packages for individual modules
- Improve build efficiency and testing capabilities
- Update documentation to reflect new structure
- Successfully convert the git module to an independent package.
2025-06-18 14:12:36 +03:00
ba9103685f ...
2025-06-16 07:53:03 +02:00
dee38eb6c2 ... 2025-06-16 07:30:37 +02:00
49c879359b ... 2025-06-15 23:44:59 +02:00
c0df07d6df ...
2025-06-15 23:05:11 +02:00
6a1e70c484 ...
2025-06-15 22:43:49 +02:00
e7e8e7daf8 ... 2025-06-15 22:40:28 +02:00
8a8ead17cb ... 2025-06-15 22:16:40 +02:00
0e7dba9466 ...
2025-06-15 22:15:44 +02:00
f0d7636cda ...
2025-06-15 22:12:15 +02:00
3a6bde02d5 ... 2025-06-15 21:51:23 +02:00
3a7b323f9a ...
2025-06-15 21:50:10 +02:00
66d5c8588a ...
2025-06-15 21:37:16 +02:00
29a06d2bb4 ... 2025-06-15 21:27:21 +02:00
Mahmoud-Emad
bb39f3e3f2 Merge branch 'main' of https://git.threefold.info/herocode/sal
2025-06-15 20:36:12 +03:00
Mahmoud-Emad
5194f5245d chore: Update Gitea host and Zinit client dependency
- Updated the Gitea host URL in `.roo/mcp.json` and `Cargo.toml`
  to reflect the change from `git.ourworld.tf` to `git.threefold.info`.
- Updated the `zinit-client` dependency in `Cargo.toml` to version
  `0.3.0`.  This ensures compatibility with the updated repository.
- Updated file paths in example files to reflect the new repository URL.
2025-06-15 20:36:02 +03:00
a9255de679 Merge branch 'main' of git.threefold.info:herocode/sal 2025-06-15 16:21:04 +02:00
7d05567ad2 Update git remote URL from git.ourworld.tf to git.threefold.info 2025-06-15 16:21:03 +02:00
timurgordon
c0e11c6510 merge and fix tests
2025-05-23 21:46:11 +03:00
timurgordon
fedf957079 Merge branch 'development_lee' 2025-05-23 21:14:31 +03:00
timurgordon
65e404e517 merge branches and document
2025-05-23 21:12:17 +03:00
944f22be23 Merge pull request 'clean up' (#12) from zinit_client into main
Reviewed-on: herocode/sal#12
2025-05-23 17:37:45 +00:00
887e66bb17 Merge pull request 'mycelium-rhai' (#9) from mycelium-rhai into main
Reviewed-on: herocode/sal#9
2025-05-23 16:40:28 +00:00
Lee Smet
e5a4a1b634 Add tests for symmetric keys
Signed-off-by: Lee Smet <lee.smet@hotmail.com>
2025-05-16 15:19:45 +02:00
Lee Smet
7f55cf4fba Add tests for asymmetric keys, add public key export
Signed-off-by: Lee Smet <lee.smet@hotmail.com>
2025-05-16 15:05:45 +02:00
Maxime Van Hees
c26e0e5ad8 removed unused imports
2025-05-16 15:04:00 +02:00
Lee Smet
365814b424 Fix signature key import/export, add tests
Signed-off-by: Lee Smet <lee.smet@hotmail.com>
2025-05-16 14:37:10 +02:00
Maxime Van Hees
cc4e087f1d fixed wrong link to file, again
2025-05-16 14:32:35 +02:00
Maxime Van Hees
229fef217f fixed wrong link to file
2025-05-16 14:32:03 +02:00
Maxime Van Hees
dd84ce3f48 remove command from tutorial
2025-05-16 14:30:46 +02:00
Maxime Van Hees
7b8b8c662e Added tutorial on how to send/receive messages with 2 mycelium peers running on the same host
2025-05-16 14:28:49 +02:00
Lee Smet
d29a8fbb67 Rename Store to Vault and move to lib root
Signed-off-by: Lee Smet <lee.smet@hotmail.com>
2025-05-16 11:24:27 +02:00
Maxime Van Hees
771df07c25 Add tutorial explaining how to use Mycelium in Rhai scripts
2025-05-16 10:25:51 +02:00
Maxime Van Hees
9a23c4cc09 sending and receiving messages via Rhai script + added examples
2025-05-15 17:30:20 +02:00
Lee Smet
2014c63b78 Remove old files
Signed-off-by: Lee Smet <lee.smet@hotmail.com>
2025-05-15 13:53:54 +02:00
Lee Smet
2adda10664 Basic API
Signed-off-by: Lee Smet <lee.smet@hotmail.com>
2025-05-15 13:53:16 +02:00
Lee Smet
7b1908b676 Use kvstore as backing
Signed-off-by: Lee Smet <lee.smet@hotmail.com>
2025-05-14 19:21:02 +02:00
Lee Smet
e9b867a36e Individual methods for keystores
Signed-off-by: Lee Smet <lee.smet@hotmail.com>
2025-05-14 11:49:36 +02:00
Lee Smet
78c0fd7871 Define the global KeySpace interface
Signed-off-by: Lee Smet <lee.smet@hotmail.com>
2025-05-14 11:10:52 +02:00
Lee Smet
e44ee83e74 Implement proper key types
Signed-off-by: Lee Smet <lee.smet@hotmail.com>
2025-05-13 17:37:33 +02:00
0c425470a5 Merge pull request 'Simplify and Refactor Asymmetric Encryption/Decryption' (#10) from development_fix_code into main
Reviewed-on: herocode/sal#10
2025-05-13 13:00:16 +00:00
Maxime Van Hees
3e64a53a83 working example for mycelium
2025-05-13 14:10:32 +02:00
Maxime Van Hees
3225b3f029 corrected response mapping from API requests 2025-05-13 14:10:32 +02:00
Maxime Van Hees
3417e2c1ff fixed merge conflict 2025-05-13 14:10:10 +02:00
Mahmoud Emad
7add64562e feat: Simplify asymmetric encryption/decryption
- Simplify asymmetric encryption by using a single symmetric key
  instead of deriving a key from an ephemeral key exchange.  This
  improves clarity and reduces complexity.
- The new implementation encrypts the symmetric key with the
  recipient's public key and then encrypts the message with the
  symmetric key.
- Improve test coverage for asymmetric encryption/decryption.
2025-05-13 14:45:05 +03:00
Mahmoud Emad
809599d60c fix: Get the code to compile 2025-05-13 14:12:48 +03:00
Mahmoud Emad
25f2ae6fa9 refactor: Refactor keypair and Ethereum wallet handling
- Moved `prepare_function_arguments` and `convert_token_to_rhai` to the `ethereum` module for better organization.
- Updated `keypair` module to use the new `session_manager` structure, improving code clarity.
- Changed `KeyPair` type usage to new `vault::keyspace::keypair_types::KeyPair` for consistency.
- Improved error handling and clarity in `EthereumWallet` methods.
2025-05-13 13:55:04 +03:00
Lee Smet
dfe6c91273 Fix build issues
Signed-off-by: Lee Smet <lee.smet@hotmail.com>
2025-05-13 11:45:06 +02:00
a4438d63e0 ... 2025-05-13 08:02:23 +03:00
393c4270d4 ... 2025-05-13 07:28:02 +03:00
495fe92321 Merge branch 'development_keypair_tests'
* development_keypair_tests:
  feat: Add comprehensive test suite for Keypair module
2025-05-13 06:51:30 +03:00
577d80b282 restore 2025-05-13 06:51:20 +03:00
3f8aecb786 tests & fixes in kvs & keypair 2025-05-13 06:45:04 +03:00
Mahmoud Emad
c7a5699798 feat: Add comprehensive test suite for Keypair module
- Added tests for keypair creation and operations.
- Added tests for key space management.
- Added tests for session management and error handling.
- Added tests for asymmetric encryption and decryption.
- Improved error handling and reporting in the module.
2025-05-12 15:44:14 +03:00
Mahmoud Emad
3a0900fc15 refactor: Improve Rhai test runner and vault module code
- Updated the Rhai test runner script to correctly find test files.
- Improved the structure and formatting of the `vault.rs` module.
- Minor code style improvements in multiple files.
2025-05-12 12:47:37 +03:00
Maxime Van Hees
916eabfa42 clean up 2025-05-12 11:17:15 +02:00
a8ed0900fd ...
2025-05-12 12:16:03 +03:00
e47e163285 ...
2025-05-12 06:14:16 +03:00
8aa2b2da26 Merge branch 'zinit_client'
* zinit_client:
  working example to showcase zinit usage in Rhai scripts
  implemented zinit-client for integration with Rhai-scripts

# Conflicts:
#	Cargo.toml
#	src/lib.rs
#	src/rhai/mod.rs
2025-05-12 06:12:40 +03:00
992481ce1b Merge branch 'development_hero_vault'
* development_hero_vault:
  Add docs
  Support contract call args in rhai bindings
  remove obsolete print
  feat: support interacting with smart contracts on EVM-based blockchains
  refactor and add peaq support
  feat: introduce hero_vault
2025-05-12 06:10:05 +03:00
516d0177e7 ...
2025-05-12 06:09:25 +03:00
Mahmoud Emad
8285fdb7b9 Merge branch 'main' of https://git.ourworld.tf/herocode/sal
2025-05-10 08:50:16 +03:00
Mahmoud Emad
1ebd591f19 feat: Enhance documentation and add .gitignore entries
- Add new documentation sections for PostgreSQL installer
  functions and usage examples.  Improves clarity and
  completeness of the documentation.
- Add new files and patterns to .gitignore to prevent
  unnecessary files from being committed to the repository.
  Improves repository cleanliness and reduces clutter.
2025-05-10 08:50:05 +03:00
7298645368 Add docs 2025-05-10 01:21:10 +03:00
f669bdb84f Support contract call args in rhai bindings
2025-05-10 00:42:21 +03:00
654f91b849 remove obsolete print 2025-05-09 19:15:34 +03:00
619ce57776 feat: support interacting with smart contracts on EVM-based blockchains
2025-05-09 19:04:38 +03:00
2695b5f5f7 Merge remote-tracking branch 'origin/main' into development_hero_vault 2025-05-09 17:20:49 +03:00
7828f82f58 Merge pull request 'Implement PostgreSQL Installer Module for Rhai' (#6) from development_psgl_installer into main
Reviewed-on: herocode/sal#6
2025-05-09 14:12:57 +00:00
7a346a1dd1 Merge remote-tracking branch 'origin/main' into development_hero_vault
2025-05-09 17:08:05 +03:00
07390c3cae refactor and add peaq support 2025-05-09 16:57:31 +03:00
Mahmoud Emad
138dce66fa feat: Enhance PostgreSQL installation with image pulling
- Pull the PostgreSQL image before installation to ensure the latest
  version is used. This improves reliability and reduces the chance of
  using outdated images.  Improves the robustness of the installation
  process.
- Added comprehensive unit tests for `PostgresInstallerConfig`,
  `PostgresInstallerError`, `install_postgres`, `create_database`,
  `execute_sql`, and `is_postgres_running` functions to ensure
  correctness and handle potential errors effectively.  Improves code
  quality and reduces the risk of regressions.
2025-05-09 16:13:24 +03:00
Mahmoud Emad
49e85ff8e6 docs: Enhance PostgreSQL client module documentation
- Add details about the new PostgreSQL installer feature.
- Include prerequisites for installer and basic operations.
- Expand test file descriptions with installer details.
- Add descriptions for installer functions.
- Include example usage for both basic operations and installer.
2025-05-09 15:47:26 +03:00
Maxime Van Hees
f386890a8a working example to showcase zinit usage in Rhai scripts 2025-05-09 11:53:09 +02:00
Maxime Van Hees
61bd58498a implemented zinit-client for integration with Rhai-scripts 2025-05-08 17:03:00 +02:00
98ab2e1536 feat: introduce hero_vault 2025-05-08 17:44:37 +03:00
821 changed files with 85978 additions and 3470 deletions

.github/workflows/publish.yml (new file, +227)

@@ -0,0 +1,227 @@
name: Publish SAL Crates

on:
  release:
    types: [published]
  workflow_dispatch:
    inputs:
      version:
        description: 'Version to publish (e.g., 0.1.0)'
        required: true
        type: string
      dry_run:
        description: 'Dry run (do not actually publish)'
        required: false
        type: boolean
        default: false

env:
  CARGO_TERM_COLOR: always

jobs:
  publish:
    name: Publish to crates.io
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Install Rust toolchain
        uses: dtolnay/rust-toolchain@stable
        with:
          toolchain: stable
      - name: Cache Cargo dependencies
        uses: actions/cache@v4
        with:
          path: |
            ~/.cargo/bin/
            ~/.cargo/registry/index/
            ~/.cargo/registry/cache/
            ~/.cargo/git/db/
            target/
          key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: |
            ${{ runner.os }}-cargo-
      - name: Install cargo-edit for version management
        run: cargo install cargo-edit
      - name: Set version from release tag
        if: github.event_name == 'release'
        run: |
          VERSION=${GITHUB_REF#refs/tags/v}
          echo "PUBLISH_VERSION=$VERSION" >> $GITHUB_ENV
          echo "Publishing version: $VERSION"
      - name: Set version from workflow input
        if: github.event_name == 'workflow_dispatch'
        run: |
          echo "PUBLISH_VERSION=${{ github.event.inputs.version }}" >> $GITHUB_ENV
          echo "Publishing version: ${{ github.event.inputs.version }}"
      - name: Update version in all crates
        run: |
          echo "Updating version to $PUBLISH_VERSION"
          # Update root Cargo.toml
          cargo set-version $PUBLISH_VERSION
          # Update each crate
          CRATES=(os process text net git vault kubernetes virt redisclient postgresclient zinit_client mycelium rhai)
          for crate in "${CRATES[@]}"; do
            if [ -d "$crate" ]; then
              cd "$crate"
              cargo set-version $PUBLISH_VERSION
              cd ..
              echo "Updated $crate to version $PUBLISH_VERSION"
            fi
          done
      - name: Run tests
        run: cargo test --workspace --verbose
      - name: Check formatting
        run: cargo fmt --all -- --check
      - name: Run clippy
        run: cargo clippy --workspace --all-targets --all-features -- -D warnings
      - name: Dry run publish (check packages)
        run: |
          echo "Checking all packages can be published..."
          CRATES=(os process text net git vault kubernetes virt redisclient postgresclient zinit_client mycelium rhai)
          for crate in "${CRATES[@]}"; do
            if [ -d "$crate" ]; then
              echo "Checking $crate..."
              cd "$crate"
              cargo publish --dry-run
              cd ..
            fi
          done
          echo "Checking main crate..."
          cargo publish --dry-run
      - name: Publish crates (dry run)
        if: github.event.inputs.dry_run == 'true'
        run: |
          echo "🔍 DRY RUN MODE - Would publish the following crates:"
          echo "Individual crates: sal-os, sal-process, sal-text, sal-net, sal-git, sal-vault, sal-kubernetes, sal-virt, sal-redisclient, sal-postgresclient, sal-zinit-client, sal-mycelium, sal-rhai"
          echo "Meta-crate: sal"
          echo "Version: $PUBLISH_VERSION"
      - name: Publish individual crates
        if: github.event.inputs.dry_run != 'true'
        env:
          CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
        run: |
          echo "Publishing individual crates..."
          # Crates in dependency order
          CRATES=(os process text net git vault kubernetes virt redisclient postgresclient zinit_client mycelium rhai)
          for crate in "${CRATES[@]}"; do
            if [ -d "$crate" ]; then
              echo "Publishing sal-$crate..."
              cd "$crate"
              # Retry logic for transient failures
              for attempt in 1 2 3; do
                if cargo publish --token $CARGO_REGISTRY_TOKEN; then
                  echo "✅ sal-$crate published successfully"
                  break
                else
                  if [ $attempt -eq 3 ]; then
                    echo "❌ Failed to publish sal-$crate after 3 attempts"
                    exit 1
                  else
                    echo "⚠️ Attempt $attempt failed, retrying in 30 seconds..."
                    sleep 30
                  fi
                fi
              done
              cd ..
              # Wait for crates.io to process
              if [ "$crate" != "rhai" ]; then
                echo "⏳ Waiting 30 seconds for crates.io to process..."
                sleep 30
              fi
            fi
          done
      - name: Publish main crate
        if: github.event.inputs.dry_run != 'true'
        env:
          CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
        run: |
          echo "Publishing main sal crate..."
          # Wait a bit longer before publishing the meta-crate
          echo "⏳ Waiting 60 seconds for all individual crates to be available..."
          sleep 60
          # Retry logic for the main crate
          for attempt in 1 2 3; do
            if cargo publish --token $CARGO_REGISTRY_TOKEN; then
              echo "✅ Main sal crate published successfully"
              break
            else
              if [ $attempt -eq 3 ]; then
                echo "❌ Failed to publish main sal crate after 3 attempts"
                exit 1
              else
                echo "⚠️ Attempt $attempt failed, retrying in 60 seconds..."
                sleep 60
              fi
            fi
          done
      - name: Create summary
        if: always()
        run: |
          echo "## 📦 SAL Publishing Summary" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "**Version:** $PUBLISH_VERSION" >> $GITHUB_STEP_SUMMARY
          echo "**Trigger:** ${{ github.event_name }}" >> $GITHUB_STEP_SUMMARY
          if [ "${{ github.event.inputs.dry_run }}" == "true" ]; then
            echo "**Mode:** Dry Run" >> $GITHUB_STEP_SUMMARY
          else
            echo "**Mode:** Live Publishing" >> $GITHUB_STEP_SUMMARY
          fi
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "### Published Crates" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "- sal-os" >> $GITHUB_STEP_SUMMARY
          echo "- sal-process" >> $GITHUB_STEP_SUMMARY
          echo "- sal-text" >> $GITHUB_STEP_SUMMARY
          echo "- sal-net" >> $GITHUB_STEP_SUMMARY
          echo "- sal-git" >> $GITHUB_STEP_SUMMARY
          echo "- sal-vault" >> $GITHUB_STEP_SUMMARY
          echo "- sal-kubernetes" >> $GITHUB_STEP_SUMMARY
          echo "- sal-virt" >> $GITHUB_STEP_SUMMARY
          echo "- sal-redisclient" >> $GITHUB_STEP_SUMMARY
          echo "- sal-postgresclient" >> $GITHUB_STEP_SUMMARY
          echo "- sal-zinit-client" >> $GITHUB_STEP_SUMMARY
          echo "- sal-mycelium" >> $GITHUB_STEP_SUMMARY
          echo "- sal-rhai" >> $GITHUB_STEP_SUMMARY
          echo "- sal (meta-crate)" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "### Usage" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo '```bash' >> $GITHUB_STEP_SUMMARY
          echo "# Individual crates" >> $GITHUB_STEP_SUMMARY
          echo "cargo add sal-os sal-process sal-text" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "# Meta-crate with features" >> $GITHUB_STEP_SUMMARY
          echo "cargo add sal --features core" >> $GITHUB_STEP_SUMMARY
          echo "cargo add sal --features all" >> $GITHUB_STEP_SUMMARY
          echo '```' >> $GITHUB_STEP_SUMMARY

.github/workflows/test-publish.yml (new file, +233)

@@ -0,0 +1,233 @@
name: Test Publishing Setup

on:
  push:
    branches: [ main, master ]
    paths:
      - '**/Cargo.toml'
      - 'scripts/publish-all.sh'
      - '.github/workflows/publish.yml'
  pull_request:
    branches: [ main, master ]
    paths:
      - '**/Cargo.toml'
      - 'scripts/publish-all.sh'
      - '.github/workflows/publish.yml'
  workflow_dispatch:

env:
  CARGO_TERM_COLOR: always

jobs:
  test-publish-setup:
    name: Test Publishing Setup
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Install Rust toolchain
        uses: dtolnay/rust-toolchain@stable
        with:
          toolchain: stable
      - name: Cache Cargo dependencies
        uses: actions/cache@v4
        with:
          path: |
            ~/.cargo/bin/
            ~/.cargo/registry/index/
            ~/.cargo/registry/cache/
            ~/.cargo/git/db/
            target/
          key: ${{ runner.os }}-cargo-publish-test-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: |
            ${{ runner.os }}-cargo-publish-test-
            ${{ runner.os }}-cargo-
      - name: Install cargo-edit
        run: cargo install cargo-edit
      - name: Test workspace structure
        run: |
          echo "Testing workspace structure..."
          # Check that all expected crates exist
          EXPECTED_CRATES=(os process text net git vault kubernetes virt redisclient postgresclient zinit_client mycelium rhai herodo)
          for crate in "${EXPECTED_CRATES[@]}"; do
            if [ -d "$crate" ] && [ -f "$crate/Cargo.toml" ]; then
              echo "✅ $crate exists"
            else
              echo "❌ $crate missing or invalid"
              exit 1
            fi
          done
      - name: Test feature configuration
        run: |
          echo "Testing feature configuration..."
          # Test that features work correctly
          cargo check --features os
          cargo check --features process
          cargo check --features text
          cargo check --features net
          cargo check --features git
          cargo check --features vault
          cargo check --features kubernetes
          cargo check --features virt
          cargo check --features redisclient
          cargo check --features postgresclient
          cargo check --features zinit_client
          cargo check --features mycelium
          cargo check --features rhai
          echo "✅ All individual features work"
          # Test feature groups
          cargo check --features core
          cargo check --features clients
          cargo check --features infrastructure
          cargo check --features scripting
          echo "✅ All feature groups work"
          # Test all features
          cargo check --features all
          echo "✅ All features together work"
      - name: Test dry-run publishing
        run: |
          echo "Testing dry-run publishing..."
          # Test each individual crate can be packaged
          CRATES=(os process text net git vault kubernetes virt redisclient postgresclient zinit_client mycelium rhai)
          for crate in "${CRATES[@]}"; do
            echo "Testing sal-$crate..."
            cd "$crate"
            cargo publish --dry-run
            cd ..
            echo "✅ sal-$crate can be published"
          done
          # Test main crate
          echo "Testing main sal crate..."
          cargo publish --dry-run
          echo "✅ Main sal crate can be published"
      - name: Test publishing script
        run: |
          echo "Testing publishing script..."
          # Make script executable
          chmod +x scripts/publish-all.sh
          # Test dry run
          ./scripts/publish-all.sh --dry-run --version 0.1.0-test
          echo "✅ Publishing script works"
      - name: Test version consistency
        run: |
          echo "Testing version consistency..."
          # Get version from root Cargo.toml
          ROOT_VERSION=$(grep '^version = ' Cargo.toml | head -1 | sed 's/version = "\(.*\)"/\1/')
          echo "Root version: $ROOT_VERSION"
          # Check all crates have the same version
          CRATES=(os process text net git vault kubernetes virt redisclient postgresclient zinit_client mycelium rhai herodo)
          for crate in "${CRATES[@]}"; do
            if [ -f "$crate/Cargo.toml" ]; then
              CRATE_VERSION=$(grep '^version = ' "$crate/Cargo.toml" | head -1 | sed 's/version = "\(.*\)"/\1/')
              if [ "$CRATE_VERSION" = "$ROOT_VERSION" ]; then
                echo "✅ $crate version matches: $CRATE_VERSION"
              else
                echo "❌ $crate version mismatch: $CRATE_VERSION (expected $ROOT_VERSION)"
                exit 1
              fi
            fi
          done
      - name: Test metadata completeness
        run: |
          echo "Testing metadata completeness..."
          # Check that all crates have required metadata
          CRATES=(os process text net git vault kubernetes virt redisclient postgresclient zinit_client mycelium rhai)
          for crate in "${CRATES[@]}"; do
            echo "Checking sal-$crate metadata..."
            cd "$crate"
            # Check required fields exist
            if ! grep -q '^name = "sal-' Cargo.toml; then
              echo "❌ $crate missing or incorrect name"
              exit 1
            fi
            if ! grep -q '^description = ' Cargo.toml; then
              echo "❌ $crate missing description"
              exit 1
            fi
            if ! grep -q '^repository = ' Cargo.toml; then
              echo "❌ $crate missing repository"
              exit 1
            fi
            if ! grep -q '^license = ' Cargo.toml; then
              echo "❌ $crate missing license"
              exit 1
            fi
            echo "✅ sal-$crate metadata complete"
            cd ..
          done
      - name: Test dependency resolution
        run: |
          echo "Testing dependency resolution..."
          # Test that all workspace dependencies resolve correctly
          cargo tree --workspace > /dev/null
          echo "✅ All dependencies resolve correctly"
          # Test that there are no dependency conflicts
          cargo check --workspace
          echo "✅ No dependency conflicts"
      - name: Generate publishing report
        if: always()
        run: |
          echo "## 🧪 Publishing Setup Test Report" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "### ✅ Tests Passed" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "- Workspace structure validation" >> $GITHUB_STEP_SUMMARY
          echo "- Feature configuration testing" >> $GITHUB_STEP_SUMMARY
          echo "- Dry-run publishing simulation" >> $GITHUB_STEP_SUMMARY
          echo "- Publishing script validation" >> $GITHUB_STEP_SUMMARY
          echo "- Version consistency check" >> $GITHUB_STEP_SUMMARY
          echo "- Metadata completeness verification" >> $GITHUB_STEP_SUMMARY
          echo "- Dependency resolution testing" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "### 📦 Ready for Publishing" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "All SAL crates are ready for publishing to crates.io!" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "**Individual Crates:** 13 modules" >> $GITHUB_STEP_SUMMARY
          echo "**Meta-crate:** sal with optional features" >> $GITHUB_STEP_SUMMARY
          echo "**Binary:** herodo script executor" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "### 🚀 Next Steps" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "1. Create a release tag (e.g., v0.1.0)" >> $GITHUB_STEP_SUMMARY
          echo "2. The publish workflow will automatically trigger" >> $GITHUB_STEP_SUMMARY
          echo "3. All crates will be published to crates.io" >> $GITHUB_STEP_SUMMARY
          echo "4. Users can install with: \`cargo add sal-os\` or \`cargo add sal --features all\`" >> $GITHUB_STEP_SUMMARY

.gitignore (46 changed lines)

@@ -22,4 +22,48 @@ Cargo.lock
/rhai_test_template
/rhai_test_download
/rhai_test_fs
run_rhai_tests.log
new_location
log.txt
file.txt
fix_doc*
# Dependencies
/node_modules
# Production
/build
# Generated files
.docusaurus
.cache-loader
# Misc
.DS_Store
.env.local
.env.development.local
.env.test.local
.env.production.local
npm-debug.log*
yarn-debug.log*
yarn-error.log*
bun.lockb
bun.lock
yarn.lock
build.sh
build_dev.sh
develop.sh
docusaurus.config.ts
sidebars.ts
tsconfig.json
Cargo.toml.bak
for_augment
myenv.sh

.roo/mcp.json (deleted)

@@ -1,16 +0,0 @@
{
  "mcpServers": {
    "gitea": {
      "command": "/Users/despiegk/hero/bin/mcpgitea",
      "args": [
        "-t", "stdio",
        "--host", "https://gitea.com",
        "--token", "5bd13c898368a2edbfcef43f898a34857b51b37a"
      ],
      "env": {
        "GITEA_HOST": "https://git.ourworld.tf/",
        "GITEA_ACCESS_TOKEN": "5bd13c898368a2edbfcef43f898a34857b51b37a"
      }
    }
  }
}

Cargo.toml (modified)

@@ -4,51 +4,203 @@ version = "0.1.0"
edition = "2021"
authors = ["PlanetFirst <info@incubaid.com>"]
description = "System Abstraction Layer - A library for easy interaction with operating system features"
repository = "https://git.ourworld.tf/herocode/sal"
repository = "https://git.threefold.info/herocode/sal"
license = "Apache-2.0"
keywords = ["system", "os", "abstraction", "platform", "filesystem"]
categories = ["os", "filesystem", "api-bindings"]
readme = "README.md"
[dependencies]
tera = "1.19.0" # Template engine for text rendering
# Cross-platform functionality
[workspace]
members = [
"packages/clients/myceliumclient",
"packages/clients/postgresclient",
"packages/clients/redisclient",
"packages/clients/zinitclient",
"packages/clients/rfsclient",
"packages/core/net",
"packages/core/text",
"packages/crypt/vault",
"packages/data/ourdb",
"packages/data/radixtree",
"packages/data/tst",
"packages/system/git",
"packages/system/kubernetes",
"packages/system/os",
"packages/system/process",
"packages/system/virt",
"rhai",
"rhailib",
"herodo",
"packages/clients/hetznerclient",
"packages/ai/codemonkey",
]
resolver = "2"
[workspace.metadata]
# Workspace-level metadata
rust-version = "1.70.0"
[workspace.dependencies]
# Core shared dependencies with consistent versions
anyhow = "1.0.98"
base64 = "0.22.1"
bytes = "1.7.1"
dirs = "6.0.0"
env_logger = "0.11.8"
futures = "0.3.30"
glob = "0.3.1"
lazy_static = "1.4.0"
libc = "0.2"
cfg-if = "1.0"
thiserror = "1.0" # For error handling
redis = "0.22.0" # Redis client
postgres = "0.19.4" # PostgreSQL client
tokio-postgres = "0.7.8" # Async PostgreSQL client
postgres-types = "0.2.5" # PostgreSQL type conversions
lazy_static = "1.4.0" # For lazy initialization of static variables
regex = "1.8.1" # For regex pattern matching
serde = { version = "1.0", features = [
"derive",
] } # For serialization/deserialization
serde_json = "1.0" # For JSON handling
glob = "0.3.1" # For file pattern matching
tempfile = "3.5" # For temporary file operations
log = "0.4" # Logging facade
rhai = { version = "1.12.0", features = ["sync"] } # Embedded scripting language
rand = "0.8.5" # Random number generation
clap = "2.33" # Command-line argument parsing
r2d2 = "0.8.10"
log = "0.4"
once_cell = "1.18.0"
rand = "0.8.5"
regex = "1.8.1"
reqwest = { version = "0.12.15", features = ["json", "blocking"] }
rhai = { version = "1.12.0", features = ["sync"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
tempfile = "3.5"
thiserror = "2.0.12"
tokio = { version = "1.45.0", features = ["full"] }
url = "2.4"
uuid = { version = "1.16.0", features = ["v4"] }
# Database dependencies
postgres = "0.19.10"
r2d2_postgres = "0.18.2"
redis = "0.31.0"
tokio-postgres = "0.7.13"
# Optional features for specific OS functionality
[target.'cfg(unix)'.dependencies]
nix = "0.26" # Unix-specific functionality
# Crypto dependencies
chacha20poly1305 = "0.10.1"
k256 = { version = "0.13.4", features = ["ecdsa", "ecdh"] }
sha2 = "0.10.7"
hex = "0.4"
bincode = { version = "2.0.1", features = ["serde"] }
pbkdf2 = "0.12.2"
getrandom = { version = "0.3.3", features = ["wasm_js"] }
tera = "1.19.0"
[target.'cfg(windows)'.dependencies]
windows = { version = "0.48", features = [
# Ethereum dependencies
ethers = { version = "2.0.7", features = ["legacy"] }
# Platform-specific dependencies
nix = "0.30.1"
windows = { version = "0.61.1", features = [
"Win32_Foundation",
"Win32_System_Threading",
"Win32_Storage_FileSystem",
] }
[dev-dependencies]
tempfile = "3.5" # For tests that need temporary files/directories
# Specialized dependencies
zinit-client = "0.4.0"
urlencoding = "2.1.3"
tokio-test = "0.4.4"
kube = { version = "0.95.0", features = ["client", "config", "derive"] }
k8s-openapi = { version = "0.23.0", features = ["latest"] }
tokio-retry = "0.3.0"
governor = "0.6.3"
tower = { version = "0.5.2", features = ["timeout", "limit"] }
serde_yaml = "0.9"
postgres-types = "0.2.5"
r2d2 = "0.8.10"
[[bin]]
name = "herodo"
path = "src/bin/herodo.rs"
# SAL dependencies
sal-git = { path = "packages/system/git" }
sal-kubernetes = { path = "packages/system/kubernetes" }
sal-redisclient = { path = "packages/clients/redisclient" }
sal-mycelium = { path = "packages/clients/myceliumclient" }
sal-hetzner = { path = "packages/clients/hetznerclient" }
sal-rfs-client = { path = "packages/clients/rfsclient" }
sal-text = { path = "packages/core/text" }
sal-os = { path = "packages/system/os" }
sal-net = { path = "packages/core/net" }
sal-zinit-client = { path = "packages/clients/zinitclient" }
sal-process = { path = "packages/system/process" }
sal-virt = { path = "packages/system/virt" }
sal-postgresclient = { path = "packages/clients/postgresclient" }
sal-vault = { path = "packages/crypt/vault" }
sal-rhai = { path = "rhai" }
sal-service-manager = { path = "_archive/service_manager" }
[dependencies]
thiserror = { workspace = true }
tokio = { workspace = true }
# Optional dependencies - users can choose which modules to include
sal-git = { workspace = true, optional = true }
sal-kubernetes = { workspace = true, optional = true }
sal-redisclient = { workspace = true, optional = true }
sal-mycelium = { workspace = true, optional = true }
sal-hetzner = { workspace = true, optional = true }
sal-rfs-client = { workspace = true, optional = true }
sal-text = { workspace = true, optional = true }
sal-os = { workspace = true, optional = true }
sal-net = { workspace = true, optional = true }
sal-zinit-client = { workspace = true, optional = true }
sal-process = { workspace = true, optional = true }
sal-virt = { workspace = true, optional = true }
sal-postgresclient = { workspace = true, optional = true }
sal-vault = { workspace = true, optional = true }
sal-rhai = { workspace = true, optional = true }
sal-service-manager = { workspace = true, optional = true }
[features]
default = []
# Individual module features
git = ["dep:sal-git"]
kubernetes = ["dep:sal-kubernetes"]
redisclient = ["dep:sal-redisclient"]
mycelium = ["dep:sal-mycelium"]
hetzner = ["dep:sal-hetzner"]
rfsclient = ["dep:sal-rfs-client"]
text = ["dep:sal-text"]
os = ["dep:sal-os"]
net = ["dep:sal-net"]
zinit_client = ["dep:sal-zinit-client"]
process = ["dep:sal-process"]
virt = ["dep:sal-virt"]
postgresclient = ["dep:sal-postgresclient"]
vault = ["dep:sal-vault"]
rhai = ["dep:sal-rhai"]
# service_manager is removed as it's not a direct member anymore
# Convenience feature groups
core = ["os", "process", "text", "net"]
clients = ["redisclient", "postgresclient", "zinit_client", "mycelium", "hetzner", "rfsclient"]
infrastructure = ["git", "vault", "kubernetes", "virt"]
scripting = ["rhai"]
all = [
"git",
"kubernetes",
"redisclient",
"mycelium",
"hetzner",
"rfsclient",
"text",
"os",
"net",
"zinit_client",
"process",
"virt",
"postgresclient",
"vault",
"rhai",
]
# Examples
[[example]]
name = "postgres_cluster"
path = "examples/kubernetes/clusters/postgres.rs"
required-features = ["kubernetes"]
[[example]]
name = "redis_cluster"
path = "examples/kubernetes/clusters/redis.rs"
required-features = ["kubernetes"]
[[example]]
name = "generic_cluster"
path = "examples/kubernetes/clusters/generic.rs"
required-features = ["kubernetes"]

PUBLISHING.md (new file, +239)

@@ -0,0 +1,239 @@
# SAL Publishing Guide
This guide explains how to publish SAL crates to crates.io and how users can consume them.
## 🎯 Publishing Strategy
SAL uses a **modular publishing approach** where each module is published as an individual crate. This allows users to install only the functionality they need, reducing compilation time and binary size.
## 📦 Crate Structure
### Individual Crates
Each SAL module is published as a separate crate:
| Crate Name | Description | Category |
|------------|-------------|----------|
| `sal-os` | Operating system operations | Core |
| `sal-process` | Process management | Core |
| `sal-text` | Text processing utilities | Core |
| `sal-net` | Network operations | Core |
| `sal-git` | Git repository management | Infrastructure |
| `sal-vault` | Cryptographic operations | Infrastructure |
| `sal-kubernetes` | Kubernetes cluster management | Infrastructure |
| `sal-virt` | Virtualization tools (Buildah, nerdctl) | Infrastructure |
| `sal-redisclient` | Redis database client | Clients |
| `sal-postgresclient` | PostgreSQL database client | Clients |
| `sal-zinit-client` | Zinit process supervisor client | Clients |
| `sal-mycelium` | Mycelium network client | Clients |
| `sal-rhai` | Rhai scripting integration | Scripting |
### Meta-crate
The main `sal` crate serves as a meta-crate that re-exports all modules with optional features:
```toml
[dependencies]
sal = { version = "0.1.0", features = ["os", "process", "text"] }
```
## 🚀 Publishing Process
### Prerequisites
1. **Crates.io Account**: Ensure you have a crates.io account and API token
2. **Repository Access**: Ensure the repository URL is accessible
3. **Version Consistency**: All crates should use the same version number
### Publishing Individual Crates
Each crate can be published independently:
```bash
# Publish core modules
cd os && cargo publish
cd ../process && cargo publish
cd ../text && cargo publish
cd ../net && cargo publish
# Publish infrastructure modules
cd ../git && cargo publish
cd ../vault && cargo publish
cd ../kubernetes && cargo publish
cd ../virt && cargo publish
# Publish client modules
cd ../redisclient && cargo publish
cd ../postgresclient && cargo publish
cd ../zinit_client && cargo publish
cd ../mycelium && cargo publish
# Publish scripting module
cd ../rhai && cargo publish
# Finally, publish the meta-crate
cd .. && cargo publish
```
### Automated Publishing
Use the comprehensive publishing script:
```bash
# Test the publishing process (safe)
./scripts/publish-all.sh --dry-run --version 0.1.0
# Actually publish to crates.io
./scripts/publish-all.sh --version 0.1.0
```
The script handles:
- **Dependency order** - Publishes crates in correct dependency order
- **Path dependencies** - Automatically updates path deps to version deps
- **Rate limiting** - Waits between publishes to avoid rate limits
- **Error handling** - Stops on failures with clear error messages
- **Dry run mode** - Test without actually publishing
## 👥 User Consumption
### Installation Options
#### Option 1: Individual Crates (Recommended)
Users install only what they need:
```bash
# Core functionality
cargo add sal-os sal-process sal-text sal-net
# Database operations
cargo add sal-redisclient sal-postgresclient
# Infrastructure management
cargo add sal-git sal-vault sal-kubernetes
# Service integration
cargo add sal-zinit-client sal-mycelium
# Scripting
cargo add sal-rhai
```
**Usage:**
```rust
use sal_os::fs;
use sal_process::run;
use sal_git::GitManager;
fn main() -> Result<(), Box<dyn std::error::Error>> {
let files = fs::list_files(".")?;
let result = run::command("echo hello")?;
let git = GitManager::new(".")?;
Ok(())
}
```
#### Option 2: Meta-crate with Features
Users can use the main crate with selective features:
```bash
# Specific modules
cargo add sal --features os,process,text
# Feature groups
cargo add sal --features core # os, process, text, net
cargo add sal --features clients # redisclient, postgresclient, zinit_client, mycelium
cargo add sal --features infrastructure # git, vault, kubernetes, virt
cargo add sal --features scripting # rhai
# Everything
cargo add sal --features all
```
**Usage:**
```rust
// Cargo.toml: sal = { version = "0.1.0", features = ["os", "process", "git"] }
use sal::os::fs;
use sal::process::run;
use sal::git::GitManager;
fn main() -> Result<(), Box<dyn std::error::Error>> {
let files = fs::list_files(".")?;
let result = run::command("echo hello")?;
let git = GitManager::new(".")?;
Ok(())
}
```
### Feature Groups
The meta-crate provides convenient feature groups:
- **`core`**: Essential system operations (os, process, text, net)
- **`clients`**: Database and service clients (redisclient, postgresclient, zinit_client, mycelium)
- **`infrastructure`**: Infrastructure management tools (git, vault, kubernetes, virt)
- **`scripting`**: Rhai scripting support (rhai)
- **`all`**: Everything included
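As an illustration, depending on the `core` group alone is equivalent to listing its four modules individually:
```toml
[dependencies]
# "core" expands to os, process, text and net
sal = { version = "0.1.0", features = ["core"] }
```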
## 📋 Version Management
### Semantic Versioning
All SAL crates follow semantic versioning:
- **Major version**: Breaking API changes
- **Minor version**: New features, backward compatible
- **Patch version**: Bug fixes, backward compatible
### Synchronized Releases
All crates are released with the same version number to ensure compatibility:
```toml
# All crates use the same version
sal-os = "0.1.0"
sal-process = "0.1.0"
sal-git = "0.1.0"
# etc.
```
## 🔧 Maintenance
### Updating Dependencies
When updating dependencies:
1. Update `Cargo.toml` in the workspace root
2. Update individual crate dependencies if needed
3. Test all crates: `cargo test --workspace`
4. Publish with incremented version numbers
### Adding New Modules
To add a new SAL module:
1. Create the new crate directory
2. Add to workspace members in root `Cargo.toml`
3. Add optional dependency in root `Cargo.toml`
4. Add feature flag in root `Cargo.toml`
5. Add conditional re-export in `src/lib.rs`
6. Update documentation
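As a sketch of steps 2-4, the root `Cargo.toml` additions for a hypothetical module `sal-example` might look roughly like this (names are placeholders, not an actual module):
```toml
# Root Cargo.toml additions for a hypothetical `sal-example` module
[workspace]
members = ["os", "process", "example"]              # step 2: register the new crate directory

[dependencies]
sal-example = { path = "example", optional = true } # step 3: optional dependency

[features]
example = ["dep:sal-example"]                       # step 4: feature flag
```
Step 5 is then a conditional re-export in `src/lib.rs`, such as `#[cfg(feature = "example")] pub use sal_example as example;`.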
## 🎉 Benefits
### For Users
- **Minimal Dependencies**: Install only what you need
- **Faster Builds**: Smaller dependency trees compile faster
- **Smaller Binaries**: Reduced binary size
- **Clear Dependencies**: Explicit about what functionality is used
### For Maintainers
- **Independent Releases**: Can release individual crates as needed
- **Focused Testing**: Test individual modules in isolation
- **Clear Ownership**: Each crate has clear responsibility
- **Easier Maintenance**: Smaller, focused codebases
This publishing strategy provides the best of both worlds: modularity for users who want minimal dependencies, and convenience for users who prefer a single crate with features.

README.md

@@ -1,73 +1,136 @@
# Herocode Herolib Rust Repository
## Overview
This repository contains the **Herocode Herolib** Rust library and a collection of scripts, examples, and utilities for building, testing, and publishing the SAL (System Abstraction Layer) crates. The repository includes:
- **Rust crates** for various system components (e.g., `os`, `process`, `text`, `git`, `vault`, `kubernetes`, etc.).
- **Rhai scripts** and test suites for each crate.
- **Utility scripts** to automate common development tasks.
## Scripts
The repository provides three primary helper scripts located in the repository root:
| Script | Description | Typical Usage |
|--------|-------------|--------------|
| `scripts/publish-all.sh` | Publishes all SAL crates to **crates.io** in the correct dependency order. Handles version bumping, dependency updates, dry-run mode, and rate limiting. | `./scripts/publish-all.sh [--dry-run] [--wait <seconds>] [--version <ver>]` |
| `build_herodo.sh` | Builds the `herodo` binary from the `herodo` package and optionally runs a specified Rhai script. | `./build_herodo.sh [script_name]` |
| `run_rhai_tests.sh` | Executes all Rhai test suites across the repository, logging results and providing a summary. | `./run_rhai_tests.sh` |
Below are detailed usage instructions for each script.
---
## 1. `scripts/publish-all.sh`
### Purpose
- Publishes each SAL crate in the correct dependency order.
- Updates crate versions (if `--version` is supplied).
- Updates path dependencies to version dependencies before publishing.
- Supports **dry-run** mode to preview actions without publishing.
- Handles rate limiting between crate publishes.
### Options
| Option | Description |
|--------|-------------|
| `--dry-run` | Shows what would be published without actually publishing. |
| `--wait <seconds>` | Wait time between publishes (default: 15s). |
| `--version <ver>` | Set a new version for all crates (updates `Cargo.toml` files). |
| `-h, --help` | Show help message. |
### Example Usage
```bash
# Dry run: no crates will be published
./scripts/publish-all.sh --dry-run
# Publish with a custom wait time and version bump
./scripts/publish-all.sh --wait 30 --version 1.2.3
# Normal publish (no dry run)
./scripts/publish-all.sh
```
### Notes
- Must be run from the repository root (where `Cargo.toml` lives).
- Requires `cargo` and a logged-in `cargo` session (`cargo login`).
- The script automatically updates dependencies in each crate's `Cargo.toml` to use the new version before publishing.
---
## 2. `build_herodo.sh`
### Purpose
- Builds the `herodo` binary from the `herodo` package.
- Copies the binary to a systemwide location (`/usr/local/bin`) if run as root, otherwise to `~/hero/bin`.
- Optionally runs a specified Rhai script after building.
### Usage
```bash
# Build only
./build_herodo.sh
# Build and run a specific Rhai script (e.g., `example`):
./build_herodo.sh example
```
### Details
- The script changes to its own directory, builds the `herodo` crate (`cargo build`), and copies the binary.
- If a script name is provided, it looks for the script in:
- `src/rhaiexamples/<name>.rhai`
- `src/herodo/scripts/<name>.rhai`
- If the script is not found, the script exits with an error.
---
## 3. `run_rhai_tests.sh`
### Purpose
- Runs **all** Rhai test suites across the repository.
- Supports both the legacy `rhai_tests` directory and the newer `*/tests/rhai` layout.
- Logs output to `run_rhai_tests.log` and prints a summary.
### Usage
```bash
# Run all tests
./run_rhai_tests.sh
```
### Output
- Colored console output for readability.
- Log file (`run_rhai_tests.log`) contains full output for later review.
- Summary includes total modules, passed, and failed counts.
- Exit code `0` if all tests pass, `1` otherwise.
---
## General Development Workflow
1. **Build**: Use `build_herodo.sh` to compile the `herodo` binary.
2. **Test**: Run `run_rhai_tests.sh` to ensure all Rhai scripts pass.
3. **Publish**: When ready to release, use `scripts/publish-all.sh` (with `--dry-run` first to verify).
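In shell form, one pass through this workflow (finishing with the recommended dry run) looks like:
```bash
./build_herodo.sh                       # 1. build the herodo binary
./run_rhai_tests.sh                     # 2. run all Rhai test suites
./scripts/publish-all.sh --dry-run      # 3. verify the release before publishing
```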
## Prerequisites
- **Rust toolchain** (`cargo`, `rustc`) installed.
- **Rhai** interpreter (`herodo`) built and available.
- **Git** for version control.
- **Cargo login** for publishing to crates.io.
## License
See `LICENSE` for details.
---
**Happy coding!**


@@ -6,10 +6,12 @@ cd "$(dirname "${BASH_SOURCE[0]}")"
rm -f ./target/debug/herodo
# Build the herodo project
echo "Building herodo..."
cargo build --bin herodo
# cargo build --release --bin herodo
# Build the herodo project from the herodo package
echo "Building herodo from herodo package..."
cd herodo
cargo build
# cargo build --release
cd ..
# Check if the build was successful
if [ $? -ne 0 ]; then
@@ -20,8 +22,14 @@ fi
# Echo a success message
echo "Build successful!"
mkdir -p ~/hero/bin/
cp target/debug/herodo ~/hero/bin/herodo
if [ "$EUID" -eq 0 ]; then
echo "Running as root, copying to /usr/local/bin/"
cp target/debug/herodo /usr/local/bin/herodo
else
echo "Running as non-root user, copying to ~/hero/bin/"
mkdir -p ~/hero/bin/
cp target/debug/herodo ~/hero/bin/herodo
fi
# Check if a script name was provided
if [ $# -eq 1 ]; then

cargo_instructions.md Normal file

config/README.md Normal file

@@ -0,0 +1,14 @@
# Environment Configuration
To set up your environment variables:
1. Copy the template file to `env.sh`:
```bash
cp config/myenv_templ.sh config/env.sh
```
2. Edit `config/env.sh` and fill in your specific values for the variables.
3. This file (`config/env.sh`) is excluded from version control by the project's `.gitignore` configuration, ensuring your sensitive information remains local and is never committed to the repository.
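After editing, the variables can be loaded into the current shell session (assuming a POSIX-compatible shell) before running any examples that need them:
```bash
source config/env.sh
```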

config/myenv_templ.sh Normal file

@@ -0,0 +1,6 @@
export OPENROUTER_API_KEY=""
export GROQ_API_KEY=""
export CEREBRAS_API_KEY=""
export OPENAI_API_KEY="sk-xxxxxxx"


@@ -16,13 +16,13 @@ Additionally, there's a runner script (`run_all_tests.rhai`) that executes all t
To run all tests, execute the following command from the project root:
```bash
herodo --path src/rhai_tests/git/run_all_tests.rhai
herodo --path git/tests/rhai/run_all_tests.rhai
```
To run individual test scripts:
```bash
herodo --path src/rhai_tests/git/01_git_basic.rhai
herodo --path git/tests/rhai/01_git_basic.rhai
```
## Test Details


@@ -0,0 +1,386 @@
# Mycelium Tutorial for Rhai
This tutorial explains how to use the Mycelium networking functionality in Rhai scripts. Mycelium is a peer-to-peer networking system that allows nodes to communicate with each other, and the Rhai bindings provide an easy way to interact with Mycelium from your scripts.
## Introduction
The Mycelium module for Rhai provides the following capabilities:
- Getting node information
- Managing peers (listing, adding, removing)
- Viewing routing information
- Sending and receiving messages between nodes
This tutorial will walk you through using these features with example scripts.
## Prerequisites
Before using the Mycelium functionality in Rhai, you need:
1. A running Mycelium node accessible via HTTP
> See https://github.com/threefoldtech/mycelium
2. The Rhai runtime with Mycelium module enabled
## Basic Mycelium Operations
Let's start by exploring the basic operations available in Mycelium using the `mycelium_basic.rhai` example.
### Getting Node Information
To get information about your Mycelium node:
```rhai
// API URL for Mycelium
let api_url = "http://localhost:8989";
// Get node information
print("Getting node information:");
try {
let node_info = mycelium_get_node_info(api_url);
print(`Node subnet: ${node_info.nodeSubnet}`);
print(`Node public key: ${node_info.nodePubkey}`);
} catch(err) {
print(`Error getting node info: ${err}`);
}
```
This code:
1. Sets the API URL for your Mycelium node
2. Calls `mycelium_get_node_info()` to retrieve information about the node
3. Prints the node's subnet and public key
### Managing Peers
#### Listing Peers
To list all peers connected to your Mycelium node:
```rhai
// List all peers
print("\nListing all peers:");
try {
let peers = mycelium_list_peers(api_url);
if peers.is_empty() {
print("No peers connected.");
} else {
for peer in peers {
print(`Peer Endpoint: ${peer.endpoint.proto}://${peer.endpoint.socketAddr}`);
print(` Type: ${peer.type}`);
print(` Connection State: ${peer.connectionState}`);
print(` Bytes sent: ${peer.txBytes}`);
print(` Bytes received: ${peer.rxBytes}`);
}
}
} catch(err) {
print(`Error listing peers: ${err}`);
}
```
This code:
1. Calls `mycelium_list_peers()` to get all connected peers
2. Iterates through the peers and prints their details
#### Adding a Peer
To add a new peer to your Mycelium node:
```rhai
// Add a new peer
print("\nAdding a new peer:");
let new_peer_address = "tcp://65.21.231.58:9651";
try {
let result = mycelium_add_peer(api_url, new_peer_address);
print(`Peer added: ${result.success}`);
} catch(err) {
print(`Error adding peer: ${err}`);
}
```
This code:
1. Specifies a peer address to add
2. Calls `mycelium_add_peer()` to add the peer to your node
3. Prints whether the operation was successful
#### Removing a Peer
To remove a peer from your Mycelium node:
```rhai
// Remove a peer
print("\nRemoving a peer:");
let peer_id = "tcp://65.21.231.58:9651"; // This is the peer we added earlier
try {
let result = mycelium_remove_peer(api_url, peer_id);
print(`Peer removed: ${result.success}`);
} catch(err) {
print(`Error removing peer: ${err}`);
}
```
This code:
1. Specifies the peer ID to remove
2. Calls `mycelium_remove_peer()` to remove the peer
3. Prints whether the operation was successful
### Viewing Routing Information
#### Listing Selected Routes
To list the selected routes in your Mycelium node:
```rhai
// List selected routes
print("\nListing selected routes:");
try {
let routes = mycelium_list_selected_routes(api_url);
if routes.is_empty() {
print("No selected routes.");
} else {
for route in routes {
print(`Subnet: ${route.subnet}`);
print(` Next hop: ${route.nextHop}`);
print(` Metric: ${route.metric}`);
}
}
} catch(err) {
print(`Error listing routes: ${err}`);
}
```
This code:
1. Calls `mycelium_list_selected_routes()` to get all selected routes
2. Iterates through the routes and prints their details
#### Listing Fallback Routes
To list the fallback routes in your Mycelium node:
```rhai
// List fallback routes
print("\nListing fallback routes:");
try {
let routes = mycelium_list_fallback_routes(api_url);
if routes.is_empty() {
print("No fallback routes.");
} else {
for route in routes {
print(`Subnet: ${route.subnet}`);
print(` Next hop: ${route.nextHop}`);
print(` Metric: ${route.metric}`);
}
}
} catch(err) {
print(`Error listing fallback routes: ${err}`);
}
```
This code:
1. Calls `mycelium_list_fallback_routes()` to get all fallback routes
2. Iterates through the routes and prints their details
## Sending Messages
Now let's look at how to send messages using the `mycelium_send_message.rhai` example.
```rhai
// API URL for Mycelium
let api_url = "http://localhost:1111";
// Send a message
print("\nSending a message:");
let destination = "5af:ae6b:dcd8:ffdb:b71:7dde:d3:1033"; // Replace with the actual destination IP address
let topic = "test_topic";
let message = "Hello from Rhai sender!";
let deadline_secs = -10; // Negative value: don't wait for a reply
try {
print(`Attempting to send message to ${destination} on topic '${topic}'`);
let result = mycelium_send_message(api_url, destination, topic, message, deadline_secs);
print(`result: ${result}`);
print(`Message sent: ${result.success}`);
if result.id != "" {
print(`Message ID: ${result.id}`);
}
} catch(err) {
print(`Error sending message: ${err}`);
}
```
This code:
1. Sets the API URL for your Mycelium node
2. Specifies the destination IP address, topic, message content, and deadline
3. Calls `mycelium_send_message()` to send the message
4. Prints the result, including the message ID if successful
### Important Parameters for Sending Messages
- `api_url`: The URL of your Mycelium node's API
- `destination`: The IP address of the destination node
- `topic`: The topic to send the message on (must match what the receiver is listening for)
- `message`: The content of the message
- `deadline_secs`: Time in seconds to wait for a reply. Use a negative value if you don't want to wait for a reply.
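Putting these parameters together: the earlier example fires and forgets (negative deadline), while the sketch below waits for a reply. It assumes a node whose API is on `localhost:8989` and reuses the tutorial's placeholder destination address:
```rhai
let api_url = "http://localhost:8989";
let destination = "5af:ae6b:dcd8:ffdb:b71:7dde:d3:1033"; // placeholder: the receiver's overlay IP
try {
    // Positive deadline: block for up to 30 seconds waiting for a reply
    let result = mycelium_send_message(api_url, destination, "test_topic", "ping", 30);
    print(`Message sent: ${result.success}`);
} catch(err) {
    print(`Error sending message: ${err}`);
}
```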
## Receiving Messages
Now let's look at how to receive messages using the `mycelium_receive_message.rhai` example.
```rhai
// API URL for Mycelium
let api_url = "http://localhost:2222";
// Receive messages
print("\nReceiving messages:");
let receive_topic = "test_topic";
let wait_deadline_secs = 100;
print(`Listening for messages on topic '${receive_topic}'...`);
try {
let messages = mycelium_receive_messages(api_url, receive_topic, wait_deadline_secs);
if messages.is_empty() {
// print("No new messages received in this poll.");
} else {
print("Received a message:");
print(` Message id: ${messages.id}`);
print(` Message from: ${messages.srcIp}`);
print(` Topic: ${messages.topic}`);
print(` Payload: ${messages.payload}`);
}
} catch(err) {
print(`Error receiving messages: ${err}`);
}
print("Finished attempting to receive messages.");
```
This code:
1. Sets the API URL for your Mycelium node
2. Specifies the topic to listen on and how long to wait for messages
3. Calls `mycelium_receive_messages()` to receive messages
4. Processes and prints any received messages
### Important Parameters for Receiving Messages
- `api_url`: The URL of your Mycelium node's API
- `receive_topic`: The topic to listen for messages on (must match what the sender is using)
- `wait_deadline_secs`: Time in seconds to wait for messages to arrive. The function will block for this duration if no messages are immediately available.
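These parameters combine naturally into a polling loop; a small sketch that keeps listening in three rounds of 10 seconds each (the loop bounds and API URL are arbitrary assumptions):
```rhai
let api_url = "http://localhost:8989";
for round in range(0, 3) {
    try {
        // Each call blocks for up to 10 seconds if no message is waiting
        let messages = mycelium_receive_messages(api_url, "test_topic", 10);
        if !messages.is_empty() {
            print(`Got message on topic '${messages.topic}': ${messages.payload}`);
        }
    } catch(err) {
        print(`Error receiving messages: ${err}`);
    }
}
```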
## Complete Messaging Example
To set up a complete messaging system, you would typically run two instances of Mycelium (node A sender, node B receiver).
1. Run the `mycelium_receive_message.rhai` script to listen for messages. **Fill in the API address of node B**.
2. Run the `mycelium_send_message.rhai` script to send messages. **Fill in the API address of node A, and fill in the overlay address of node B as destination**.
### Setting Up the Receiver
First, start a Mycelium node and run the receiver script:
```rhai
// API URL for Mycelium
let api_url = "http://localhost:2222"; // Your receiver node's API URL
// Receive messages
let receive_topic = "test_topic";
let wait_deadline_secs = 100; // Wait up to 100 seconds for messages
print(`Listening for messages on topic '${receive_topic}'...`);
try {
let messages = mycelium_receive_messages(api_url, receive_topic, wait_deadline_secs);
if messages.is_empty() {
print("No new messages received in this poll.");
} else {
print("Received a message:");
print(` Message id: ${messages.id}`);
print(` Message from: ${messages.srcIp}`);
print(` Topic: ${messages.topic}`);
print(` Payload: ${messages.payload}`);
}
} catch(err) {
print(`Error receiving messages: ${err}`);
}
```
### Setting Up the Sender
Then, on another Mycelium node, run the sender script:
```rhai
// API URL for Mycelium
let api_url = "http://localhost:1111"; // Your sender node's API URL
// Send a message
let destination = "5af:ae6b:dcd8:ffdb:b71:7dde:d3:1033"; // The receiver node's IP address
let topic = "test_topic"; // Must match the receiver's topic
let message = "Hello from Rhai sender!";
let deadline_secs = -10; // Don't wait for a reply
try {
print(`Attempting to send message to ${destination} on topic '${topic}'`);
let result = mycelium_send_message(api_url, destination, topic, message, deadline_secs);
print(`Message sent: ${result.success}`);
if result.id != "" {
print(`Message ID: ${result.id}`);
}
} catch(err) {
print(`Error sending message: ${err}`);
}
```
### Example: setting up 2 different Mycelium peers on the same host and sending/receiving a message
#### Obtain Mycelium
- Download the latest Mycelium binary from https://github.com/threefoldtech/mycelium/releases/
- Or compile from source
#### Setup
- Create two different private key files. Each key file should contain exactly 32 bytes. In this example we'll save these files as `sender.bin` and `receiver.bin`. Note: generate your own key files; the values below are only examples (if your Mycelium build expects exactly 32 raw bytes, `head -c 32 /dev/urandom > sender.bin` is one way to produce them).
> `echo '9f3d72c1a84be6f027bba94cde015ee839cedb2ac4f2822bfc94449e3e2a1c6a' > sender.bin`
> `echo 'e81c5a76f42bd9a3c73fe0bb2196acdfb6348e99d0b01763a2e57ce3a4e8f5dd' > receiver.bin`
#### Start the nodes
- **Sender**: this node will have the API server hosted on `127.0.0.1:1111` and the JSON-RPC server on `127.0.0.1:8991`.
> `sudo ./mycelium --key-file sender.bin --disable-peer-discovery --disable-quic --no-tun --api-addr 127.0.0.1:1111 --jsonrpc-addr 127.0.0.1:8991`
- **Receiver**: this node will have the API server hosted on `127.0.0.1:2222` and the JSON-RPC server on `127.0.0.1:8992`.
> `sudo ./mycelium --key-file receiver.bin --disable-peer-discovery --disable-quic --no-tun --api-addr 127.0.0.1:2222 --jsonrpc-addr 127.0.0.1:8992 --peers tcp://<UNDERLAY_IP_SENDER>:9651`
- Obtain the Mycelium overlay IP by running `./mycelium --key-file receiver.bin --api-addr 127.0.0.1:2222 inspect`. **Replace this IP as destination in the [mycelium_send_message.rhai](../../../examples/mycelium/mycelium_send_message.rhai) example**.
#### Execute the examples
- First build by executing `./build_herodo.sh` from the SAL root directory
- `cd target/debug`
- Run the sender script: `sudo ./herodo --path ../../examples/mycelium/mycelium_send_message.rhai`
```
Executing: ../../examples/mycelium/mycelium_send_message.rhai
Sending a message:
Attempting to send message to 50e:6d75:4568:366e:f75:2ac3:bbb1:3fdd on topic 'test_topic'
result: #{"id": "bfd47dc689a7b826"}
Message sent:
Message ID: bfd47dc689a7b826
Script executed successfully
```
- Run the receiver script: `sudo ./herodo --path ../../examples/mycelium/mycelium_receive_message.rhai`
```
Executing: ../../examples/mycelium/mycelium_receive_message.rhai
Receiving messages:
Listening for messages on topic 'test_topic'...
Received a message:
Message id: bfd47dc689a7b826
Message from: 45d:26e1:a413:9d08:80ce:71c6:a931:4315
Topic: dGVzdF90b3BpYw==
Payload: SGVsbG8gZnJvbSBSaGFpIHNlbmRlciE=
Finished attempting to receive messages.
Script executed successfully
```
> Decoding the payload `SGVsbG8gZnJvbSBSaGFpIHNlbmRlciE=` results in the expected `Hello from Rhai sender!` message. Mission successful!
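For reference, the Base64-encoded topic and payload above can be decoded with any standard tool:
```bash
echo 'dGVzdF90b3BpYw==' | base64 -d                      # -> test_topic
echo 'SGVsbG8gZnJvbSBSaGFpIHNlbmRlciE=' | base64 -d      # -> Hello from Rhai sender!
```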


@@ -9,9 +9,12 @@ The PostgreSQL client module provides the following features:
1. **Basic PostgreSQL Operations**: Execute queries, fetch results, etc.
2. **Connection Management**: Automatic connection handling and reconnection
3. **Builder Pattern for Configuration**: Flexible configuration with authentication support
4. **PostgreSQL Installer**: Install and configure PostgreSQL using nerdctl
5. **Database Management**: Create databases and execute SQL scripts
## Prerequisites
For basic PostgreSQL operations:
- PostgreSQL server must be running and accessible
- Environment variables should be set for connection details:
- `POSTGRES_HOST`: PostgreSQL server host (default: localhost)
@@ -20,6 +23,11 @@ The PostgreSQL client module provides the following features:
- `POSTGRES_PASSWORD`: PostgreSQL password
- `POSTGRES_DB`: PostgreSQL database name (default: postgres)
For PostgreSQL installer:
- nerdctl must be installed and working
- Docker images must be accessible
- Sufficient permissions to create and manage containers
## Test Files
### 01_postgres_connection.rhai
@@ -34,6 +42,15 @@ Tests basic PostgreSQL connection and operations:
- Dropping a table
- Resetting the connection
### 02_postgres_installer.rhai
Tests PostgreSQL installer functionality:
- Installing PostgreSQL using nerdctl
- Creating a database
- Executing SQL scripts
- Checking if PostgreSQL is running
### run_all_tests.rhai
Runs all PostgreSQL client module tests and provides a summary of the results.
@@ -66,6 +83,13 @@ herodo --path src/rhai_tests/postgresclient/01_postgres_connection.rhai
- `pg_query(query)`: Execute a query and return the results as an array of maps
- `pg_query_one(query)`: Execute a query and return a single row as a map
### Installer Functions
- `pg_install(container_name, version, port, username, password)`: Install PostgreSQL using nerdctl
- `pg_create_database(container_name, db_name)`: Create a new database in PostgreSQL
- `pg_execute_sql(container_name, db_name, sql)`: Execute a SQL script in PostgreSQL
- `pg_is_running(container_name)`: Check if PostgreSQL is running
## Authentication Support
The PostgreSQL client module will support authentication using the builder pattern in a future update.
@@ -85,7 +109,9 @@ When implemented, the builder pattern will support the following configuration o
## Example Usage
### Basic PostgreSQL Operations
```rust
// Connect to PostgreSQL
if (pg_connect()) {
print("Connected to PostgreSQL!");
@@ -112,3 +138,51 @@ if (pg_connect()) {
pg_execute(drop_query);
}
```
### PostgreSQL Installer
```rust
// Install PostgreSQL
let container_name = "my-postgres";
let postgres_version = "15";
let postgres_port = 5432;
let postgres_user = "myuser";
let postgres_password = "mypassword";
if (pg_install(container_name, postgres_version, postgres_port, postgres_user, postgres_password)) {
print("PostgreSQL installed successfully!");
// Create a database
let db_name = "mydb";
if (pg_create_database(container_name, db_name)) {
print(`Database '${db_name}' created successfully!`);
// Execute a SQL script
let create_table_sql = `
CREATE TABLE users (
id SERIAL PRIMARY KEY,
name TEXT NOT NULL,
email TEXT UNIQUE NOT NULL
);
`;
let result = pg_execute_sql(container_name, db_name, create_table_sql);
print("Table created successfully!");
// Insert data
let insert_sql = "#
INSERT INTO users (name, email) VALUES
('John Doe', 'john@example.com'),
('Jane Smith', 'jane@example.com');
#";
result = pg_execute_sql(container_name, db_name, insert_sql);
print("Data inserted successfully!");
// Query data
let query_sql = "SELECT * FROM users;";
result = pg_execute_sql(container_name, db_name, query_sql);
print(`Query result: ${result}`);
}
}
```


@@ -1,4 +1,4 @@
// File: /root/code/git.ourworld.tf/herocode/sal/examples/container_example.rs
// File: /root/code/git.threefold.info/herocode/sal/examples/container_example.rs
use std::error::Error;
use sal::virt::nerdctl::Container;


@@ -2,7 +2,7 @@
// Demonstrates file system operations using SAL
// Create a test directory
let test_dir = "rhai_test_dir";
let test_dir = "/tmp/rhai_test_dir";
println(`Creating directory: ${test_dir}`);
let mkdir_result = mkdir(test_dir);
println(`Directory creation result: ${mkdir_result}`);
@@ -61,4 +61,4 @@ for file in files {
// delete(test_dir);
// println("Cleanup complete");
"File operations script completed successfully!"
"File operations script completed successfully!"


@@ -121,16 +121,16 @@ println(`Using local image: ${local_image_name}`);
// Tag the image with the localhost prefix for nerdctl compatibility
println(`Tagging image as ${local_image_name}...`);
let tag_result = bah_image_tag(final_image_name, local_image_name);
let tag_result = image_tag(final_image_name, local_image_name);
// Print a command to check if the image exists in buildah
println("\nTo verify the image was created with buildah, run:");
println("buildah images");
// Note: If nerdctl cannot find the image, you may need to push it to a registry
println("\nNote: If nerdctl cannot find the image, you may need to push it to a registry:");
println("buildah push localhost/custom-golang-nginx:latest docker://localhost:5000/custom-golang-nginx:latest");
println("nerdctl pull localhost:5000/custom-golang-nginx:latest");
// println("\nNote: If nerdctl cannot find the image, you may need to push it to a registry:");
// println("buildah push localhost/custom-golang-nginx:latest docker://localhost:5000/custom-golang-nginx:latest");
// println("nerdctl pull localhost:5000/custom-golang-nginx:latest");
let container = nerdctl_container_from_image("golang-nginx-demo", local_image_name)
.with_detach(true)


@@ -0,0 +1,44 @@
// Now use nerdctl to run a container from the new image
println("\nStarting container from the new image using nerdctl...");
// Create a container using the builder pattern
// Use localhost/ prefix to ensure nerdctl uses the local image
let local_image_name = "localhost/custom-golang-nginx:latest";
println(`Using local image: ${local_image_name}`);
// Import the image from buildah to nerdctl
println("Importing image from buildah to nerdctl...");
process_run("buildah", ["push", "custom-golang-nginx:latest", "docker-daemon:localhost/custom-golang-nginx:latest"]);
let tag_result = nerdctl_image_tag("custom-golang-nginx:latest", local_image_name);
// Tag the image with the localhost prefix for nerdctl compatibility
// println(`Tagging image as ${local_image_name}...`);
// let tag_result = bah_image_tag(final_image_name, local_image_name);
// Print a command to check if the image exists in buildah
println("\nTo verify the image was created with buildah, run:");
println("buildah images");
// Note: If nerdctl cannot find the image, you may need to push it to a registry
// println("\nNote: If nerdctl cannot find the image, you may need to push it to a registry:");
// println("buildah push localhost/custom-golang-nginx:latest docker://localhost:5000/custom-golang-nginx:latest");
// println("nerdctl pull localhost:5000/custom-golang-nginx:latest");
let container = nerdctl_container_from_image("golang-nginx-demo", local_image_name)
.with_detach(true)
.with_port("8081:80") // Map port 80 in the container to 8080 on the host
.with_restart_policy("unless-stopped")
.build();
// Start the container
let start_result = container.start();
println("\nWorkflow completed successfully!");
println("The web server should be running at http://localhost:8081");
println("You can check container logs with: nerdctl logs golang-nginx-demo");
println("To stop the container: nerdctl stop golang-nginx-demo");
println("To remove the container: nerdctl rm golang-nginx-demo");
"Buildah and nerdctl workflow completed successfully!"


@@ -1,42 +0,0 @@
fn nerdctl_download(){
let name="nerdctl";
let url="https://github.com/containerd/nerdctl/releases/download/v2.0.4/nerdctl-2.0.4-linux-amd64.tar.gz";
download(url,`/tmp/${name}`,20000);
copy(`/tmp/${name}/*`,"/root/hero/bin/");
delete(`/tmp/${name}`);
let name="containerd";
let url="https://github.com/containerd/containerd/releases/download/v2.0.4/containerd-2.0.4-linux-amd64.tar.gz";
download(url,`/tmp/${name}`,20000);
copy(`/tmp/${name}/bin/*`,"/root/hero/bin/");
delete(`/tmp/${name}`);
run("apt-get -y install buildah runc");
let url="https://github.com/threefoldtech/rfs/releases/download/v2.0.6/rfs";
download_file(url,`/tmp/rfs`,10000);
chmod_exec("/tmp/rfs");
mv(`/tmp/rfs`,"/root/hero/bin/");
}
fn ipfs_download(){
let name="ipfs";
let url="https://github.com/ipfs/kubo/releases/download/v0.34.1/kubo_v0.34.1_linux-amd64.tar.gz";
download(url,`/tmp/${name}`,20);
copy(`/tmp/${name}/kubo/ipfs`,"/root/hero/bin/ipfs");
// delete(`/tmp/${name}`);
}
nerdctl_download();
// ipfs_download();
"done"


@@ -0,0 +1,76 @@
# SAL Vault Examples
This directory contains examples demonstrating the SAL Vault functionality.
## Overview
SAL Vault provides secure key management and cryptographic operations including:
- Vault creation and management
- KeySpace operations (encrypted key-value stores)
- Symmetric key generation and operations
- Asymmetric key operations (signing and verification)
- Secure key derivation from passwords
## Current Status
⚠️ **Note**: The vault module is currently being updated to use Lee's implementation.
The Rhai scripting integration is temporarily disabled while we adapt the examples
to work with the new vault API.
## Available Operations
- **Vault Management**: Create and manage vault instances
- **KeySpace Operations**: Open encrypted key-value stores within vaults
- **Symmetric Encryption**: Generate keys and encrypt/decrypt data
- **Asymmetric Operations**: Create keypairs, sign messages, verify signatures
## Example Files (Legacy - Sameh's Implementation)
⚠️ **These examples are currently archived and use the previous vault implementation**:
- `_archive/example.rhai` - Basic example demonstrating key management, signing, and encryption
- `_archive/advanced_example.rhai` - Advanced example with error handling and complex operations
- `_archive/key_persistence_example.rhai` - Demonstrates creating and saving a key space to disk
- `_archive/load_existing_space.rhai` - Shows how to load a previously created key space
- `_archive/contract_example.rhai` - Demonstrates smart contract interactions (Ethereum)
- `_archive/agung_send_transaction.rhai` - Demonstrates Ethereum transactions on Agung network
- `_archive/agung_contract_with_args.rhai` - Shows contract interactions with arguments
## Current Implementation (Lee's Vault)
The current vault implementation provides:
```rust
// Create a new vault
let vault = Vault::new(&path).await?;
// Open an encrypted keyspace
let keyspace = vault.open_keyspace("my_space", "password").await?;
// Perform cryptographic operations
// (API documentation coming soon)
```
## Migration Status
- **Vault Core**: Lee's implementation is active
- **Archive**: Sameh's implementation preserved in `vault/_archive/`
- **Rhai Integration**: Being developed for Lee's implementation
- **Examples**: Will be updated to use Lee's API
- **Ethereum Features**: Not available in Lee's implementation
## Security
The vault uses:
- **ChaCha20Poly1305** for symmetric encryption
- **Password-based key derivation** for keyspace encryption
- **Secure key storage** with proper isolation
## Next Steps
1. **Rhai Integration**: Implement Rhai bindings for Lee's vault
2. **New Examples**: Create examples using Lee's simpler API
3. **Documentation**: Complete API documentation for Lee's implementation
4. **Migration Guide**: Provide guidance for users migrating from Sameh's implementation


@@ -0,0 +1,233 @@
// Advanced Rhai script example for Hero Vault Cryptography Module
// This script demonstrates conditional logic, error handling, and more complex operations
// Function to create a key space with error handling
fn setup_key_space(name, password) {
print("Attempting: Create key space: " + name);
let result = create_key_space(name, password);
if result {
print("✅ Create key space succeeded!");
return true;
} else {
print("❌ Create key space failed!");
}
return false;
}
// Function to create and select a keypair
fn setup_keypair(name, password) {
print("Attempting: Create keypair: " + name);
let result = create_keypair(name, password);
if result {
print("✅ Create keypair succeeded!");
print("Attempting: Select keypair: " + name);
let selected = select_keypair(name);
if selected {
print("✅ Select keypair succeeded!");
return true;
} else {
print("❌ Select keypair failed!");
}
} else {
print("❌ Create keypair failed!");
}
return false;
}
// Function to sign multiple messages
fn sign_messages(messages) {
let signatures = [];
for message in messages {
print("Signing message: " + message);
print("Attempting: Sign message");
let signature = sign(message);
if signature != "" {
print("✅ Sign message succeeded!");
signatures.push(#{
message: message,
signature: signature
});
} else {
print("❌ Sign message failed!");
}
}
return signatures;
}
// Function to verify signatures
fn verify_signatures(signed_messages) {
let results = [];
for item in signed_messages {
let message = item.message;
let signature = item.signature;
print("Verifying signature for: " + message);
print("Attempting: Verify signature");
let is_valid = verify(message, signature);
if is_valid {
print("✅ Verify signature succeeded!");
} else {
print("❌ Verify signature failed!");
}
results.push(#{
message: message,
valid: is_valid
});
}
return results;
}
// Function to encrypt multiple messages
fn encrypt_messages(messages) {
// Generate a symmetric key
print("Attempting: Generate symmetric key");
let key = generate_key();
if key == "" {
print("❌ Generate symmetric key failed!");
return [];
}
print("✅ Generate symmetric key succeeded!");
print("Using key: " + key);
let encrypted_messages = [];
for message in messages {
print("Encrypting message: " + message);
print("Attempting: Encrypt message");
let encrypted = encrypt(key, message);
if encrypted != "" {
print("✅ Encrypt message succeeded!");
encrypted_messages.push(#{
original: message,
encrypted: encrypted,
key: key
});
} else {
print("❌ Encrypt message failed!");
}
}
return encrypted_messages;
}
// Function to decrypt messages
fn decrypt_messages(encrypted_messages) {
let decrypted_messages = [];
for item in encrypted_messages {
let encrypted = item.encrypted;
let key = item.key;
let original = item.original;
print("Decrypting message...");
print("Attempting: Decrypt message");
let decrypted = decrypt(key, encrypted);
if decrypted != false {
let success = decrypted == original;
decrypted_messages.push(#{
decrypted: decrypted,
original: original,
success: success
});
if success {
print("Decryption matched original ✅");
} else {
print("Decryption did not match original ❌");
}
}
}
return decrypted_messages;
}
// Main script execution
print("=== Advanced Cryptography Script ===");
// Set up key space
let space_name = "advanced_space";
let password = "secure_password123";
if setup_key_space(space_name, password) {
print("\n--- Key space setup complete ---\n");
// Set up keypair
if setup_keypair("advanced_keypair", password) {
print("\n--- Keypair setup complete ---\n");
// Define messages to sign
let messages = [
"This is the first message to sign",
"Here's another message that needs signing",
"And a third message for good measure"
];
// Sign messages
print("\n--- Signing Messages ---\n");
let signed_messages = sign_messages(messages);
// Verify signatures
print("\n--- Verifying Signatures ---\n");
let verification_results = verify_signatures(signed_messages);
// Count successful verifications
let successful_verifications = verification_results.filter(|r| r.valid).len();
print("Successfully verified " + successful_verifications + " out of " + verification_results.len() + " signatures");
// Encrypt messages
print("\n--- Encrypting Messages ---\n");
let encrypted_messages = encrypt_messages(messages);
// Decrypt messages
print("\n--- Decrypting Messages ---\n");
let decryption_results = decrypt_messages(encrypted_messages);
// Count successful decryptions
let successful_decryptions = decryption_results.filter(|r| r.success).len();
print("Successfully decrypted " + successful_decryptions + " out of " + decryption_results.len() + " messages");
// Create Ethereum wallet
print("\n--- Creating Ethereum Wallet ---\n");
print("Attempting: Create Ethereum wallet");
let wallet_created = create_ethereum_wallet();
if wallet_created {
print("✅ Create Ethereum wallet succeeded!");
print("Attempting: Get Ethereum address");
let address = get_ethereum_address();
if address != "" {
print("✅ Get Ethereum address succeeded!");
print("Ethereum wallet address: " + address);
} else {
print("❌ Get Ethereum address failed!");
}
} else {
print("❌ Create Ethereum wallet failed!");
}
print("\n=== Script execution completed successfully! ===");
} else {
print("Failed to set up keypair. Aborting script.");
}
} else {
print("Failed to set up key space. Aborting script.");
}


@@ -0,0 +1,152 @@
// Example Rhai script for testing contract functions with arguments on Agung network
// This script demonstrates how to use call_contract_read and call_contract_write with arguments
// Step 1: Set up wallet and network
let space_name = "agung_contract_args_demo";
let password = "secure_password123";
let private_key = "51c194d20bcd25360a3aa94426b3b60f738007e42f22e1bc97821c65c353e6d2";
let network_name = "agung";
print("=== Testing Contract Functions With Arguments on Agung Network ===\n");
// Create a key space
print("Creating key space: " + space_name);
if create_key_space(space_name, password) {
print("✓ Key space created successfully");
// Create a keypair
print("\nCreating keypair...");
if create_keypair("contract_key", password) {
print("✓ Created contract keypair");
// Create a wallet from the private key for the Agung network
print("\nCreating wallet from private key for Agung network...");
if create_wallet_from_private_key_for_network(private_key, network_name) {
print("✓ Wallet created successfully");
// Get the wallet address
let wallet_address = get_wallet_address_for_network(network_name);
print("Wallet address: " + wallet_address);
// Check wallet balance
print("\nChecking wallet balance...");
let balance = get_balance(network_name, wallet_address);
if balance != "" {
print("Wallet balance: " + balance + " wei");
// Define a simple ERC-20 token contract ABI (partial)
let token_abi = `[
{
"constant": true,
"inputs": [],
"name": "name",
"outputs": [{"name": "", "type": "string"}],
"payable": false,
"stateMutability": "view",
"type": "function"
},
{
"constant": true,
"inputs": [],
"name": "symbol",
"outputs": [{"name": "", "type": "string"}],
"payable": false,
"stateMutability": "view",
"type": "function"
},
{
"constant": true,
"inputs": [],
"name": "decimals",
"outputs": [{"name": "", "type": "uint8"}],
"payable": false,
"stateMutability": "view",
"type": "function"
},
{
"constant": true,
"inputs": [{"name": "_owner", "type": "address"}],
"name": "balanceOf",
"outputs": [{"name": "balance", "type": "uint256"}],
"payable": false,
"stateMutability": "view",
"type": "function"
},
{
"constant": false,
"inputs": [{"name": "_to", "type": "address"}, {"name": "_value", "type": "uint256"}],
"name": "transfer",
"outputs": [{"name": "", "type": "bool"}],
"payable": false,
"stateMutability": "nonpayable",
"type": "function"
}
]`;
// For this example, we'll use a test token contract on Agung
let token_address = "0x7267B587E4416537060C6bF0B06f6Fd421106650";
print("\nLoading contract ABI...");
let contract = load_contract_abi(network_name, token_address, token_abi);
if contract != "" {
print("✓ Contract loaded successfully");
// First, let's try to read some data from the contract
print("\nReading contract data...");
// Try to get token name (no arguments)
let token_name = call_contract_read(contract, "name");
print("Token name: " + token_name);
// Try to get token symbol (no arguments)
let token_symbol = call_contract_read(contract, "symbol");
print("Token symbol: " + token_symbol);
// Try to get token decimals (no arguments)
let token_decimals = call_contract_read(contract, "decimals");
print("Token decimals: " + token_decimals);
// Try to get token balance (with address argument)
print("\nCalling balanceOf with address argument...");
let balance = call_contract_read(contract, "balanceOf", [wallet_address]);
print("Token balance: " + balance);
// Now, let's try to execute a write function with arguments
print("\nExecuting contract write function with arguments...");
// Define a recipient address and amount for the transfer
// Using a random valid address on the network
let recipient = "0xEEdf3468E8F232A7a03D49b674bA44740C8BD8Be";
let amount = 1000000; // Changed from string to number for uint256 compatibility
print("Attempting to transfer " + amount + " tokens to " + recipient);
// Call the transfer function with arguments
let tx_hash = call_contract_write(contract, "transfer", [recipient, amount]);
if tx_hash != "" {
print("✓ Transaction sent successfully");
print("Transaction hash: " + tx_hash);
print("You can view the transaction at: " + get_network_explorer_url(network_name) + "/tx/" + tx_hash);
} else {
print("✗ Failed to send transaction");
print("This could be due to insufficient funds, contract issues, or other errors.");
}
} else {
print("✗ Failed to load contract");
}
} else {
print("✗ Failed to get wallet balance");
}
} else {
print("✗ Failed to create wallet from private key");
}
} else {
print("✗ Failed to create keypair");
}
} else {
print("✗ Failed to create key space");
}
print("\nContract function with arguments test completed");


@@ -0,0 +1,104 @@
// Script to create an Agung wallet from a private key and send tokens
// This script demonstrates how to create a wallet from a private key and send tokens
// Define the private key and recipient address
let private_key = "0x9ecfd58eca522b0e7c109bf945966ee208cd6d593b1dc3378aedfdc60b64f512";
let recipient_address = "0xf400f9c3F7317e19523a5DB698Ce67e7a7E083e2";
print("=== Agung Wallet Transaction Demo ===");
print(`From private key: ${private_key}`);
print(`To address: ${recipient_address}`);
// First, create a key space and keypair (required for the wallet infrastructure)
let space_name = "agung_transaction_demo";
let password = "demo_password";
// Create a new key space
if !create_key_space(space_name, password) {
print("Failed to create key space");
return;
}
// Create a keypair
if !create_keypair("demo_keypair", password) {
print("Failed to create keypair");
return;
}
// Select the keypair
if !select_keypair("demo_keypair") {
print("Failed to select keypair");
return;
}
print("\nCreated and selected keypair successfully");
// Clear any existing Agung wallets to avoid conflicts
if clear_wallets_for_network("agung") {
print("Cleared existing Agung wallets");
} else {
print("Failed to clear existing Agung wallets");
return;
}
// Create a wallet from the private key directly
print("\n=== Creating Wallet from Private Key ===");
// Create a wallet from the private key for the Agung network
if create_wallet_from_private_key_for_network(private_key, "agung") {
print("Successfully created wallet from private key for Agung network");
// Get the wallet address
let wallet_address = get_wallet_address_for_network("agung");
print(`Wallet address: ${wallet_address}`);
// Create a provider for the Agung network
let provider_id = create_agung_provider();
if provider_id != "" {
print("Successfully created Agung provider");
// Check the wallet balance first
let wallet_address = get_wallet_address_for_network("agung");
let balance_wei = get_balance("agung", wallet_address);
if balance_wei == "" {
print("Failed to get wallet balance");
print("This could be due to network issues or other errors.");
return;
}
print(`Current wallet balance: ${balance_wei} wei`);
// Convert 1 AGNG to wei (1 AGNG = 10^18 wei)
// Use string representation for large numbers
let amount_wei_str = "1000000000000000000"; // 1 AGNG in wei as a string
// Check if we have enough balance
if parse_int(balance_wei) < parse_int(amount_wei_str) {
print(`Insufficient balance to send ${amount_wei_str} wei (1 AGNG)`);
print(`Current balance: ${balance_wei} wei`);
print("Please fund the wallet before attempting to send a transaction");
return;
}
print(`Attempting to send ${amount_wei_str} wei (1 AGNG) to ${recipient_address}`);
// Send the transaction using the blocking implementation
let tx_hash = send_eth("agung", recipient_address, amount_wei_str);
if tx_hash != "" {
print(`Transaction sent with hash: ${tx_hash}`);
print(`You can view the transaction at: ${get_network_explorer_url("agung")}/tx/${tx_hash}`);
} else {
print("Transaction failed");
print("This could be due to insufficient funds, network issues, or other errors.");
print("Check the logs for more details.");
}
} else {
print("Failed to create Agung provider");
}
} else {
print("Failed to create wallet from private key");
}
print("\nAgung transaction demo completed");


@@ -0,0 +1,98 @@
// Example Rhai script for interacting with smart contracts using Hero Vault
// This script demonstrates loading a contract ABI and interacting with a contract
// Step 1: Set up wallet and network
let space_name = "contract_demo_space";
let password = "secure_password123";
print("Creating key space: " + space_name);
if create_key_space(space_name, password) {
print("✓ Key space created successfully");
// Create a keypair
print("\nCreating keypair...");
if create_keypair("contract_key", password) {
print("✓ Created contract keypair");
}
// Step 2: Create an Ethereum wallet for Gnosis Chain
print("\nCreating Ethereum wallet...");
if create_ethereum_wallet() {
print("✓ Ethereum wallet created");
let address = get_ethereum_address();
print("Ethereum address: " + address);
// Step 3: Define a simple ERC-20 ABI (partial)
let erc20_abi = `[
{
"constant": true,
"inputs": [],
"name": "name",
"outputs": [{"name": "", "type": "string"}],
"payable": false,
"stateMutability": "view",
"type": "function"
},
{
"constant": true,
"inputs": [],
"name": "symbol",
"outputs": [{"name": "", "type": "string"}],
"payable": false,
"stateMutability": "view",
"type": "function"
},
{
"constant": true,
"inputs": [],
"name": "decimals",
"outputs": [{"name": "", "type": "uint8"}],
"payable": false,
"stateMutability": "view",
"type": "function"
},
{
"constant": true,
"inputs": [{"name": "owner", "type": "address"}],
"name": "balanceOf",
"outputs": [{"name": "", "type": "uint256"}],
"payable": false,
"stateMutability": "view",
"type": "function"
}
]`;
// Step 4: Load the contract ABI
print("\nLoading contract ABI...");
let contract = load_contract_abi("Gnosis", "0x4ECaBa5870353805a9F068101A40E0f32ed605C6", erc20_abi);
if contract != "" {
print("✓ Contract loaded successfully");
// Step 5: Call read-only functions
print("\nCalling read-only functions...");
// Get token name
let token_name = call_contract_read(contract, "name");
print("Token name: " + token_name);
// Get token symbol
let token_symbol = call_contract_read(contract, "symbol");
print("Token symbol: " + token_symbol);
// Get token decimals
let token_decimals = call_contract_read(contract, "decimals");
print("Token decimals: " + token_decimals);
// For now, we're just demonstrating the basic structure
} else {
print("✗ Failed to load contract");
}
} else {
print("✗ Failed to create Ethereum wallet");
}
} else {
print("✗ Failed to create key space");
}
print("\nContract example completed");


@@ -0,0 +1,85 @@
// Example Rhai script for Hero Vault Cryptography Module
// This script demonstrates key management, signing, and encryption
// Step 1: Create and manage a key space
let space_name = "demo_space";
let password = "secure_password123";
print("Creating key space: " + space_name);
if create_key_space(space_name, password) {
print("✓ Key space created successfully");
// Step 2: Create and use keypairs
print("\nCreating keypairs...");
if create_keypair("signing_key", password) {
print("✓ Created signing keypair");
}
if create_keypair("encryption_key", password) {
print("✓ Created encryption keypair");
}
// List all keypairs
let keypairs = list_keypairs();
print("Available keypairs: " + keypairs);
// Step 3: Sign a message
print("\nPerforming signing operations...");
if select_keypair("signing_key") {
print("✓ Selected signing keypair");
let message = "This is a secure message that needs to be signed";
print("Message: " + message);
let signature = sign(message);
print("Signature: " + signature);
// Verify the signature
let is_valid = verify(message, signature);
if is_valid {
print("Signature verification: ✓ Valid");
} else {
print("Signature verification: ✗ Invalid");
}
}
// Step 4: Encrypt and decrypt data
print("\nPerforming encryption operations...");
// Generate a symmetric key
let sym_key = generate_key();
print("Generated symmetric key: " + sym_key);
// Encrypt a message
let secret = "This is a top secret message that must be encrypted";
print("Original message: " + secret);
let encrypted_data = encrypt(sym_key, secret);
print("Encrypted data: " + encrypted_data);
// Decrypt the message
let decrypted_data = decrypt(sym_key, encrypted_data);
print("Decrypted message: " + decrypted_data);
// Verify decryption was successful
if decrypted_data == secret {
print("✓ Encryption/decryption successful");
} else {
print("✗ Encryption/decryption failed");
}
// Step 5: Create an Ethereum wallet
print("\nCreating Ethereum wallet...");
if select_keypair("encryption_key") {
print("✓ Selected keypair for Ethereum wallet");
if create_ethereum_wallet() {
print("✓ Ethereum wallet created");
let address = get_ethereum_address();
print("Ethereum address: " + address);
}
}
print("\nScript execution completed successfully!");
}


@@ -0,0 +1,65 @@
// Example Rhai script demonstrating key space persistence for Hero Vault
// This script shows how to create, save, and load key spaces
// Step 1: Create a key space
let space_name = "persistent_space";
let password = "secure_password123";
print("Creating key space: " + space_name);
if create_key_space(space_name, password) {
print("✓ Key space created successfully");
// Step 2: Create keypairs in this space
print("\nCreating keypairs...");
if create_keypair("persistent_key1", password) {
print("✓ Created first keypair");
}
if create_keypair("persistent_key2", password) {
print("✓ Created second keypair");
}
// List all keypairs
let keypairs = list_keypairs();
print("Available keypairs: " + keypairs);
// Step 3: Clear the session (simulate closing and reopening the CLI)
print("\nClearing session (simulating restart)...");
// Note: In a real script, you would exit here and run a new script
// For demonstration purposes, we'll continue in the same script
// Step 4: Load the key space from disk
print("\nLoading key space from disk...");
if load_key_space(space_name, password) {
print("✓ Key space loaded successfully");
// Verify the keypairs are still available
let loaded_keypairs = list_keypairs();
print("Keypairs after loading: " + loaded_keypairs);
// Step 5: Use a keypair from the loaded space
print("\nSelecting and using a keypair...");
if select_keypair("persistent_key1") {
print("✓ Selected keypair");
let message = "This message was signed using a keypair from a loaded key space";
let signature = sign(message);
print("Message: " + message);
print("Signature: " + signature);
// Verify the signature
let is_valid = verify(message, signature);
if is_valid {
print("Signature verification: ✓ Valid");
} else {
print("Signature verification: ✗ Invalid");
}
}
} else {
print("✗ Failed to load key space");
}
} else {
print("✗ Failed to create key space");
}
print("\nScript execution completed!");


@@ -0,0 +1,65 @@
// Example Rhai script demonstrating loading an existing key space for Hero Vault
// This script shows how to load a previously created key space and use its keypairs
// Define the key space name and password
let space_name = "persistent_space";
let password = "secure_password123";
print("Loading existing key space: " + space_name);
// Load the key space from disk
if load_key_space(space_name, password) {
print("✓ Key space loaded successfully");
// List available keypairs
let keypairs = list_keypairs();
print("Available keypairs: " + keypairs);
// Use both keypairs to sign different messages
if select_keypair("persistent_key1") {
print("\nUsing persistent_key1:");
let message1 = "Message signed with the first keypair";
let signature1 = sign(message1);
print("Message: " + message1);
print("Signature: " + signature1);
let is_valid1 = verify(message1, signature1);
if is_valid1 {
print("Verification: ✓ Valid");
} else {
print("Verification: ✗ Invalid");
}
}
if select_keypair("persistent_key2") {
print("\nUsing persistent_key2:");
let message2 = "Message signed with the second keypair";
let signature2 = sign(message2);
print("Message: " + message2);
print("Signature: " + signature2);
let is_valid2 = verify(message2, signature2);
if is_valid2 {
print("Verification: ✓ Valid");
} else {
print("Verification: ✗ Invalid");
}
}
// Create an Ethereum wallet using one of the keypairs
print("\nCreating Ethereum wallet from persistent keypair:");
if select_keypair("persistent_key1") {
if create_ethereum_wallet() {
print("✓ Ethereum wallet created");
let address = get_ethereum_address();
print("Ethereum address: " + address);
} else {
print("✗ Failed to create Ethereum wallet");
}
}
} else {
print("✗ Failed to load key space. Make sure you've run key_persistence_example.rhai first.");
}
print("\nScript execution completed!");


@@ -0,0 +1,72 @@
//! Basic Kubernetes operations example
//!
//! This script demonstrates basic Kubernetes operations using the SAL Kubernetes module.
//!
//! Prerequisites:
//! - A running Kubernetes cluster
//! - Valid kubeconfig file or in-cluster configuration
//! - Appropriate permissions for the operations
//!
//! Usage:
//! herodo examples/kubernetes/basic_operations.rhai
print("=== SAL Kubernetes Basic Operations Example ===");
// Create a KubernetesManager for the default namespace
print("Creating KubernetesManager for 'default' namespace...");
let km = kubernetes_manager_new("default");
print("✓ KubernetesManager created for namespace: " + namespace(km));
// List all pods in the namespace
print("\n--- Listing Pods ---");
let pods = pods_list(km);
print("Found " + pods.len() + " pods in the namespace:");
for pod in pods {
print(" - " + pod);
}
// List all services in the namespace
print("\n--- Listing Services ---");
let services = services_list(km);
print("Found " + services.len() + " services in the namespace:");
for service in services {
print(" - " + service);
}
// List all deployments in the namespace
print("\n--- Listing Deployments ---");
let deployments = deployments_list(km);
print("Found " + deployments.len() + " deployments in the namespace:");
for deployment in deployments {
print(" - " + deployment);
}
// Get resource counts
print("\n--- Resource Counts ---");
let counts = resource_counts(km);
print("Resource counts in namespace '" + namespace(km) + "':");
for resource_type in counts.keys() {
print(" " + resource_type + ": " + counts[resource_type]);
}
// List all namespaces (cluster-wide operation)
print("\n--- Listing All Namespaces ---");
let namespaces = namespaces_list(km);
print("Found " + namespaces.len() + " namespaces in the cluster:");
for ns in namespaces {
print(" - " + ns);
}
// Check if specific namespaces exist
print("\n--- Checking Namespace Existence ---");
let test_namespaces = ["default", "kube-system", "non-existent-namespace"];
for ns in test_namespaces {
let exists = namespace_exists(km, ns);
if exists {
print("✓ Namespace '" + ns + "' exists");
} else {
print("✗ Namespace '" + ns + "' does not exist");
}
}
print("\n=== Example completed successfully! ===");


@@ -0,0 +1,134 @@
//! Generic Application Deployment Example
//!
//! This example shows how to deploy any containerized application using the
//! KubernetesManager convenience methods. This works for any Docker image.
use sal_kubernetes::KubernetesManager;
use std::collections::HashMap;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create Kubernetes manager
let km = KubernetesManager::new("default").await?;
// Clean up any existing resources first
println!("=== Cleaning up existing resources ===");
let apps_to_clean = ["web-server", "node-app", "mongodb"];
for app in &apps_to_clean {
match km.deployment_delete(app).await {
Ok(_) => println!("✓ Deleted existing deployment: {}", app),
Err(_) => println!("✓ No existing deployment to delete: {}", app),
}
match km.service_delete(app).await {
Ok(_) => println!("✓ Deleted existing service: {}", app),
Err(_) => println!("✓ No existing service to delete: {}", app),
}
}
// Example 1: Simple web server deployment
println!("\n=== Example 1: Simple Nginx Web Server ===");
km.deploy_application("web-server", "nginx:latest", 2, 80, None, None)
.await?;
println!("✅ Nginx web server deployed!");
// Example 2: Node.js application with labels
println!("\n=== Example 2: Node.js Application ===");
let mut node_labels = HashMap::new();
node_labels.insert("app".to_string(), "node-app".to_string());
node_labels.insert("tier".to_string(), "backend".to_string());
node_labels.insert("environment".to_string(), "production".to_string());
// Configure Node.js environment variables
let mut node_env_vars = HashMap::new();
node_env_vars.insert("NODE_ENV".to_string(), "production".to_string());
node_env_vars.insert("PORT".to_string(), "3000".to_string());
node_env_vars.insert("LOG_LEVEL".to_string(), "info".to_string());
node_env_vars.insert("MAX_CONNECTIONS".to_string(), "1000".to_string());
km.deploy_application(
"node-app", // name
"node:18-alpine", // image
3, // replicas - scale to 3 instances
3000, // port
Some(node_labels), // labels
Some(node_env_vars), // environment variables
)
.await?;
println!("✅ Node.js application deployed!");
// Example 3: Database deployment (any database)
println!("\n=== Example 3: MongoDB Database ===");
let mut mongo_labels = HashMap::new();
mongo_labels.insert("app".to_string(), "mongodb".to_string());
mongo_labels.insert("type".to_string(), "database".to_string());
mongo_labels.insert("engine".to_string(), "mongodb".to_string());
// Configure MongoDB environment variables
let mut mongo_env_vars = HashMap::new();
mongo_env_vars.insert(
"MONGO_INITDB_ROOT_USERNAME".to_string(),
"admin".to_string(),
);
mongo_env_vars.insert(
"MONGO_INITDB_ROOT_PASSWORD".to_string(),
"mongopassword".to_string(),
);
mongo_env_vars.insert("MONGO_INITDB_DATABASE".to_string(), "myapp".to_string());
km.deploy_application(
"mongodb", // name
"mongo:6.0", // image
1, // replicas - single instance for simplicity
27017, // port
Some(mongo_labels), // labels
Some(mongo_env_vars), // environment variables
)
.await?;
println!("✅ MongoDB deployed!");
// Check status of all deployments
println!("\n=== Checking Deployment Status ===");
let deployments = km.deployments_list().await?;
for deployment in &deployments {
if let Some(name) = &deployment.metadata.name {
let total_replicas = deployment
.spec
.as_ref()
.and_then(|s| s.replicas)
.unwrap_or(0);
let ready_replicas = deployment
.status
.as_ref()
.and_then(|s| s.ready_replicas)
.unwrap_or(0);
println!(
"{}: {}/{} replicas ready",
name, ready_replicas, total_replicas
);
}
}
println!("\n🎉 All deployments completed!");
println!("\n💡 Key Points:");
println!(" • Any Docker image can be deployed using this simple interface");
println!(" • Use labels to organize and identify your applications");
println!(
" • The same method works for databases, web servers, APIs, and any containerized app"
);
println!(" • For advanced configuration, use the individual KubernetesManager methods");
println!(
" • Environment variables and resource limits can be added via direct Kubernetes API"
);
Ok(())
}


@@ -0,0 +1,79 @@
//! PostgreSQL Cluster Deployment Example (Rhai)
//!
//! This script shows how to deploy a PostgreSQL cluster using Rhai scripting
//! with the KubernetesManager convenience methods.
print("=== PostgreSQL Cluster Deployment ===");
// Create Kubernetes manager for the database namespace
print("Creating Kubernetes manager for 'database' namespace...");
let km = kubernetes_manager_new("database");
print("✓ Kubernetes manager created");
// Create the namespace if it doesn't exist
print("Creating namespace 'database' if it doesn't exist...");
try {
create_namespace(km, "database");
print("✓ Namespace 'database' created");
} catch(e) {
if e.to_string().contains("already exists") {
print("✓ Namespace 'database' already exists");
} else {
print("⚠️ Warning: " + e);
}
}
// Clean up any existing resources first
print("\nCleaning up any existing PostgreSQL resources...");
try {
delete_deployment(km, "postgres-cluster");
print("✓ Deleted existing deployment");
} catch(e) {
print("✓ No existing deployment to delete");
}
try {
delete_service(km, "postgres-cluster");
print("✓ Deleted existing service");
} catch(e) {
print("✓ No existing service to delete");
}
// Create PostgreSQL cluster using the convenience method
print("\nDeploying PostgreSQL cluster...");
try {
// Deploy PostgreSQL using the convenience method
let result = deploy_application(km, "postgres-cluster", "postgres:15", 2, 5432, #{
"app": "postgres-cluster",
"type": "database",
"engine": "postgresql"
}, #{
"POSTGRES_DB": "myapp",
"POSTGRES_USER": "postgres",
"POSTGRES_PASSWORD": "secretpassword",
"PGDATA": "/var/lib/postgresql/data/pgdata"
});
print("✓ " + result);
print("\n✅ PostgreSQL cluster deployed successfully!");
print("\n📋 Connection Information:");
print(" Host: postgres-cluster.database.svc.cluster.local");
print(" Port: 5432");
print(" Database: postgres (default)");
print(" Username: postgres (default)");
print("\n🔧 To connect from another pod:");
print(" psql -h postgres-cluster.database.svc.cluster.local -U postgres");
print("\n💡 Next steps:");
print(" • Set POSTGRES_PASSWORD environment variable");
print(" • Configure persistent storage");
print(" • Set up backup and monitoring");
} catch(e) {
print("❌ Failed to deploy PostgreSQL cluster: " + e);
}
print("\n=== Deployment Complete ===");


@@ -0,0 +1,112 @@
//! PostgreSQL Cluster Deployment Example
//!
//! This example shows how to deploy a PostgreSQL cluster using the
//! KubernetesManager convenience methods.
use sal_kubernetes::KubernetesManager;
use std::collections::HashMap;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create Kubernetes manager for the database namespace
let km = KubernetesManager::new("database").await?;
// Create the namespace if it doesn't exist
println!("Creating namespace 'database' if it doesn't exist...");
match km.namespace_create("database").await {
Ok(_) => println!("✓ Namespace 'database' created"),
Err(e) => {
if e.to_string().contains("already exists") {
println!("✓ Namespace 'database' already exists");
} else {
return Err(e.into());
}
}
}
// Clean up any existing resources first
println!("Cleaning up any existing PostgreSQL resources...");
match km.deployment_delete("postgres-cluster").await {
Ok(_) => println!("✓ Deleted existing deployment"),
Err(_) => println!("✓ No existing deployment to delete"),
}
match km.service_delete("postgres-cluster").await {
Ok(_) => println!("✓ Deleted existing service"),
Err(_) => println!("✓ No existing service to delete"),
}
// Configure PostgreSQL-specific labels
let mut labels = HashMap::new();
labels.insert("app".to_string(), "postgres-cluster".to_string());
labels.insert("type".to_string(), "database".to_string());
labels.insert("engine".to_string(), "postgresql".to_string());
// Configure PostgreSQL environment variables
let mut env_vars = HashMap::new();
env_vars.insert("POSTGRES_DB".to_string(), "myapp".to_string());
env_vars.insert("POSTGRES_USER".to_string(), "postgres".to_string());
env_vars.insert(
"POSTGRES_PASSWORD".to_string(),
"secretpassword".to_string(),
);
env_vars.insert(
"PGDATA".to_string(),
"/var/lib/postgresql/data/pgdata".to_string(),
);
// Deploy the PostgreSQL cluster using the convenience method
println!("Deploying PostgreSQL cluster...");
km.deploy_application(
"postgres-cluster", // name
"postgres:15", // image
2, // replicas (two identical pods; real primary/replica roles need extra setup)
5432, // port
Some(labels), // labels
Some(env_vars), // environment variables
)
.await?;
println!("✅ PostgreSQL cluster deployed successfully!");
// Check deployment status
let deployments = km.deployments_list().await?;
let postgres_deployment = deployments
.iter()
.find(|d| d.metadata.name.as_ref() == Some(&"postgres-cluster".to_string()));
if let Some(deployment) = postgres_deployment {
let total_replicas = deployment
.spec
.as_ref()
.and_then(|s| s.replicas)
.unwrap_or(0);
let ready_replicas = deployment
.status
.as_ref()
.and_then(|s| s.ready_replicas)
.unwrap_or(0);
println!(
"Deployment status: {}/{} replicas ready",
ready_replicas, total_replicas
);
}
println!("\n📋 Connection Information:");
println!(" Host: postgres-cluster.database.svc.cluster.local");
println!(" Port: 5432");
println!(" Database: postgres (default)");
println!(" Username: postgres (default)");
println!(" Password: Set POSTGRES_PASSWORD environment variable");
println!("\n🔧 To connect from another pod:");
println!(" psql -h postgres-cluster.database.svc.cluster.local -U postgres");
println!("\n💡 Next steps:");
println!(" • Set environment variables for database credentials");
println!(" • Add persistent volume claims for data storage");
println!(" • Configure backup and monitoring");
Ok(())
}


@@ -0,0 +1,79 @@
//! Redis Cluster Deployment Example (Rhai)
//!
//! This script shows how to deploy a Redis cluster using Rhai scripting
//! with the KubernetesManager convenience methods.
print("=== Redis Cluster Deployment ===");
// Create Kubernetes manager for the cache namespace
print("Creating Kubernetes manager for 'cache' namespace...");
let km = kubernetes_manager_new("cache");
print("✓ Kubernetes manager created");
// Create the namespace if it doesn't exist
print("Creating namespace 'cache' if it doesn't exist...");
try {
create_namespace(km, "cache");
print("✓ Namespace 'cache' created");
} catch(e) {
if e.to_string().contains("already exists") {
print("✓ Namespace 'cache' already exists");
} else {
print("⚠️ Warning: " + e);
}
}
// Clean up any existing resources first
print("\nCleaning up any existing Redis resources...");
try {
delete_deployment(km, "redis-cluster");
print("✓ Deleted existing deployment");
} catch(e) {
print("✓ No existing deployment to delete");
}
try {
delete_service(km, "redis-cluster");
print("✓ Deleted existing service");
} catch(e) {
print("✓ No existing service to delete");
}
// Create Redis cluster using the convenience method
print("\nDeploying Redis cluster...");
try {
// Deploy Redis using the convenience method
let result = deploy_application(km, "redis-cluster", "redis:7-alpine", 3, 6379, #{
"app": "redis-cluster",
"type": "cache",
"engine": "redis"
}, #{
"REDIS_PASSWORD": "redispassword",
"REDIS_PORT": "6379",
"REDIS_DATABASES": "16",
"REDIS_MAXMEMORY": "256mb",
"REDIS_MAXMEMORY_POLICY": "allkeys-lru"
});
print("✓ " + result);
print("\n✅ Redis cluster deployed successfully!");
print("\n📋 Connection Information:");
print(" Host: redis-cluster.cache.svc.cluster.local");
print(" Port: 6379");
print("\n🔧 To connect from another pod:");
print(" redis-cli -h redis-cluster.cache.svc.cluster.local");
print("\n💡 Next steps:");
print(" • Configure Redis authentication");
print(" • Set up Redis clustering configuration");
print(" • Add persistent storage");
print(" • Configure memory policies");
} catch(e) {
print("❌ Failed to deploy Redis cluster: " + e);
}
print("\n=== Deployment Complete ===");


@@ -0,0 +1,109 @@
//! Redis Cluster Deployment Example
//!
//! This example shows how to deploy a Redis cluster using the
//! KubernetesManager convenience methods.
use sal_kubernetes::KubernetesManager;
use std::collections::HashMap;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create Kubernetes manager for the cache namespace
let km = KubernetesManager::new("cache").await?;
// Create the namespace if it doesn't exist
println!("Creating namespace 'cache' if it doesn't exist...");
match km.namespace_create("cache").await {
Ok(_) => println!("✓ Namespace 'cache' created"),
Err(e) => {
if e.to_string().contains("already exists") {
println!("✓ Namespace 'cache' already exists");
} else {
return Err(e.into());
}
}
}
// Clean up any existing resources first
println!("Cleaning up any existing Redis resources...");
match km.deployment_delete("redis-cluster").await {
Ok(_) => println!("✓ Deleted existing deployment"),
Err(_) => println!("✓ No existing deployment to delete"),
}
match km.service_delete("redis-cluster").await {
Ok(_) => println!("✓ Deleted existing service"),
Err(_) => println!("✓ No existing service to delete"),
}
// Configure Redis-specific labels
let mut labels = HashMap::new();
labels.insert("app".to_string(), "redis-cluster".to_string());
labels.insert("type".to_string(), "cache".to_string());
labels.insert("engine".to_string(), "redis".to_string());
// Configure Redis environment variables
let mut env_vars = HashMap::new();
env_vars.insert("REDIS_PASSWORD".to_string(), "redispassword".to_string());
env_vars.insert("REDIS_PORT".to_string(), "6379".to_string());
env_vars.insert("REDIS_DATABASES".to_string(), "16".to_string());
env_vars.insert("REDIS_MAXMEMORY".to_string(), "256mb".to_string());
env_vars.insert(
"REDIS_MAXMEMORY_POLICY".to_string(),
"allkeys-lru".to_string(),
);
// Deploy the Redis cluster using the convenience method
println!("Deploying Redis cluster...");
km.deploy_application(
"redis-cluster", // name
"redis:7-alpine", // image
3, // replicas (three identical pods; true Redis Cluster mode needs extra configuration)
6379, // port
Some(labels), // labels
Some(env_vars), // environment variables
)
.await?;
println!("✅ Redis cluster deployed successfully!");
// Check deployment status
let deployments = km.deployments_list().await?;
let redis_deployment = deployments
.iter()
.find(|d| d.metadata.name.as_ref() == Some(&"redis-cluster".to_string()));
if let Some(deployment) = redis_deployment {
let total_replicas = deployment
.spec
.as_ref()
.and_then(|s| s.replicas)
.unwrap_or(0);
let ready_replicas = deployment
.status
.as_ref()
.and_then(|s| s.ready_replicas)
.unwrap_or(0);
println!(
"Deployment status: {}/{} replicas ready",
ready_replicas, total_replicas
);
}
println!("\n📋 Connection Information:");
println!(" Host: redis-cluster.cache.svc.cluster.local");
println!(" Port: 6379");
println!(" Password: Configure REDIS_PASSWORD environment variable");
println!("\n🔧 To connect from another pod:");
println!(" redis-cli -h redis-cluster.cache.svc.cluster.local");
println!("\n💡 Next steps:");
println!(" • Configure Redis authentication with environment variables");
println!(" • Set up Redis clustering configuration");
println!(" • Add persistent volume claims for data persistence");
println!(" • Configure memory limits and eviction policies");
Ok(())
}


@@ -0,0 +1,208 @@
//! Multi-namespace Kubernetes operations example
//!
//! This script demonstrates working with multiple namespaces and comparing resources across them.
//!
//! Prerequisites:
//! - A running Kubernetes cluster
//! - Valid kubeconfig file or in-cluster configuration
//! - Appropriate permissions for the operations
//!
//! Usage:
//! herodo examples/kubernetes/multi_namespace_operations.rhai
print("=== SAL Kubernetes Multi-Namespace Operations Example ===");
// Define namespaces to work with
let target_namespaces = ["default", "kube-system"];
let managers = #{};
print("Creating managers for multiple namespaces...");
// Create managers for each namespace
for ns in target_namespaces {
try {
let km = kubernetes_manager_new(ns);
managers[ns] = km;
print("✓ Created manager for namespace: " + ns);
} catch(e) {
print("✗ Failed to create manager for " + ns + ": " + e);
}
}
// Function to safely get resource counts
fn get_safe_counts(km) {
try {
return resource_counts(km);
} catch(e) {
print(" Warning: Could not get resource counts - " + e);
return #{};
}
}
// Function to safely get pod list
fn get_safe_pods(km) {
try {
return pods_list(km);
} catch(e) {
print(" Warning: Could not list pods - " + e);
return [];
}
}
// Compare resource counts across namespaces
print("\n--- Resource Comparison Across Namespaces ---");
let total_resources = #{};
for ns in target_namespaces {
if ns in managers {
let km = managers[ns];
print("\nNamespace: " + ns);
let counts = get_safe_counts(km);
for resource_type in counts.keys() {
let count = counts[resource_type];
print(" " + resource_type + ": " + count);
// Accumulate totals
if resource_type in total_resources {
total_resources[resource_type] = total_resources[resource_type] + count;
} else {
total_resources[resource_type] = count;
}
}
}
}
print("\n--- Total Resources Across All Namespaces ---");
for resource_type in total_resources.keys() {
print("Total " + resource_type + ": " + total_resources[resource_type]);
}
// Find namespaces with the most resources
print("\n--- Namespace Resource Analysis ---");
let namespace_totals = #{};
for ns in target_namespaces {
if ns in managers {
let km = managers[ns];
let counts = get_safe_counts(km);
let total = 0;
for resource_type in counts.keys() {
total = total + counts[resource_type];
}
namespace_totals[ns] = total;
print("Namespace '" + ns + "' has " + total + " total resources");
}
}
// Find the busiest namespace
let busiest_ns = "";
let max_resources = 0;
for ns in namespace_totals.keys() {
if namespace_totals[ns] > max_resources {
max_resources = namespace_totals[ns];
busiest_ns = ns;
}
}
if busiest_ns != "" {
print("🏆 Busiest namespace: '" + busiest_ns + "' with " + max_resources + " resources");
}
// Detailed pod analysis
print("\n--- Pod Analysis Across Namespaces ---");
let all_pods = [];
for ns in target_namespaces {
if ns in managers {
let km = managers[ns];
let pods = get_safe_pods(km);
print("\nNamespace '" + ns + "' pods:");
if pods.len() == 0 {
print(" (no pods)");
} else {
for pod in pods {
print(" - " + pod);
all_pods.push(ns + "/" + pod);
}
}
}
}
print("\n--- All Pods Summary ---");
print("Total pods across all namespaces: " + all_pods.len());
// Look for common pod name patterns
print("\n--- Pod Name Pattern Analysis ---");
let patterns = #{
"system": 0,
"kube": 0,
"coredns": 0,
"proxy": 0,
"controller": 0
};
for pod_full_name in all_pods {
let pod_name = pod_full_name.to_lower();
for pattern in patterns.keys() {
if pod_name.contains(pattern) {
patterns[pattern] = patterns[pattern] + 1;
}
}
}
print("Common pod name patterns found:");
for pattern in patterns.keys() {
if patterns[pattern] > 0 {
print(" '" + pattern + "': " + patterns[pattern] + " pods");
}
}
// Namespace health check
print("\n--- Namespace Health Check ---");
for ns in target_namespaces {
if ns in managers {
let km = managers[ns];
print("\nChecking namespace: " + ns);
// Check if namespace exists (should always be true for our managers)
let exists = namespace_exists(km, ns);
if exists {
print(" ✓ Namespace exists and is accessible");
} else {
print(" ✗ Namespace existence check failed");
}
// Try to get resource counts as a health indicator
let counts = get_safe_counts(km);
if counts.len() > 0 {
print(" ✓ Can access resources (" + counts.len() + " resource types)");
} else {
print(" ⚠ No resources found or access limited");
}
}
}
// Create a summary report
print("\n--- Summary Report ---");
print("Namespaces analyzed: " + target_namespaces.len());
print("Total unique resource types: " + total_resources.len());
let grand_total = 0;
for resource_type in total_resources.keys() {
grand_total = grand_total + total_resources[resource_type];
}
print("Grand total resources: " + grand_total);
print("\nResource breakdown:");
for resource_type in total_resources.keys() {
let count = total_resources[resource_type];
let percentage = if grand_total > 0 { (count * 100) / grand_total } else { 0 };
print(" " + resource_type + ": " + count + " (" + percentage + "%)");
}
print("\n=== Multi-namespace operations example completed! ===");


@@ -0,0 +1,95 @@
//! Kubernetes namespace management example
//!
//! This script demonstrates namespace creation and management operations.
//!
//! Prerequisites:
//! - A running Kubernetes cluster
//! - Valid kubeconfig file or in-cluster configuration
//! - Permissions to create and manage namespaces
//!
//! Usage:
//! herodo examples/kubernetes/namespace_management.rhai
print("=== SAL Kubernetes Namespace Management Example ===");
// Create a KubernetesManager
let km = kubernetes_manager_new("default");
print("Created KubernetesManager for namespace: " + namespace(km));
// Define test namespace names
let test_namespaces = [
"sal-test-namespace-1",
"sal-test-namespace-2",
"sal-example-app"
];
print("\n--- Creating Test Namespaces ---");
for ns in test_namespaces {
print("Creating namespace: " + ns);
try {
namespace_create(km, ns);
print("✓ Successfully created namespace: " + ns);
} catch(e) {
print("✗ Failed to create namespace " + ns + ": " + e);
}
}
// Wait a moment for namespaces to be created
print("\nWaiting for namespaces to be ready...");
// Verify namespaces were created
print("\n--- Verifying Namespace Creation ---");
for ns in test_namespaces {
let exists = namespace_exists(km, ns);
if exists {
print("✓ Namespace '" + ns + "' exists");
} else {
print("✗ Namespace '" + ns + "' was not found");
}
}
// List all namespaces to see our new ones
print("\n--- Current Namespaces ---");
let all_namespaces = namespaces_list(km);
print("Total namespaces in cluster: " + all_namespaces.len());
for ns in all_namespaces {
if ns.starts_with("sal-") {
print(" 🔹 " + ns + " (created by this example)");
} else {
print(" - " + ns);
}
}
// Test idempotent creation (creating the same namespace again)
print("\n--- Testing Idempotent Creation ---");
let test_ns = test_namespaces[0];
print("Attempting to create existing namespace: " + test_ns);
try {
namespace_create(km, test_ns);
print("✓ Idempotent creation successful (no error for existing namespace)");
} catch(e) {
print("✗ Unexpected error during idempotent creation: " + e);
}
// Create managers for the new namespaces and check their properties
print("\n--- Creating Managers for New Namespaces ---");
for ns in test_namespaces {
try {
let ns_km = kubernetes_manager_new(ns);
print("✓ Created manager for namespace: " + namespace(ns_km));
// Get resource counts for the new namespace (should be mostly empty)
let counts = resource_counts(ns_km);
print(" Resource counts: " + counts);
} catch(e) {
print("✗ Failed to create manager for " + ns + ": " + e);
}
}
print("\n--- Cleanup Instructions ---");
print("To clean up the test namespaces created by this example, run:");
for ns in test_namespaces {
print(" kubectl delete namespace " + ns);
}
print("\n=== Namespace management example completed! ===");


@@ -0,0 +1,157 @@
//! Kubernetes pattern-based deletion example
//!
//! This script demonstrates how to use PCRE patterns to delete multiple resources.
//!
//! ⚠️ WARNING: This example includes actual deletion operations!
//! ⚠️ Only run this in a test environment!
//!
//! Prerequisites:
//! - A running Kubernetes cluster (preferably a test cluster)
//! - Valid kubeconfig file or in-cluster configuration
//! - Permissions to delete resources
//!
//! Usage:
//! herodo examples/kubernetes/pattern_deletion.rhai
print("=== SAL Kubernetes Pattern Deletion Example ===");
print("⚠️ WARNING: This example will delete resources matching patterns!");
print("⚠️ Only run this in a test environment!");
// Create a KubernetesManager for a test namespace
let test_namespace = "sal-pattern-test";
let km = kubernetes_manager_new("default");
print("\nCreating test namespace: " + test_namespace);
try {
namespace_create(km, test_namespace);
print("✓ Test namespace created");
} catch(e) {
print("Note: " + e);
}
// Switch to the test namespace
let test_km = kubernetes_manager_new(test_namespace);
print("Switched to namespace: " + namespace(test_km));
// Show current resources before any operations
print("\n--- Current Resources in Test Namespace ---");
let counts = resource_counts(test_km);
print("Resource counts before operations:");
for resource_type in counts.keys() {
print(" " + resource_type + ": " + counts[resource_type]);
}
// List current pods to see what we're working with
let current_pods = pods_list(test_km);
print("\nCurrent pods in namespace:");
if current_pods.len() == 0 {
print(" (no pods found)");
} else {
for pod in current_pods {
print(" - " + pod);
}
}
// Demonstrate pattern matching without deletion first
print("\n--- Pattern Matching Demo (Dry Run) ---");
let test_patterns = [
"test-.*", // Match anything starting with "test-"
".*-temp$", // Match anything ending with "-temp"
"demo-pod-.*", // Match demo pods
"nginx-.*", // Match nginx pods
"app-[0-9]+", // Match app-1, app-2, etc.
];
for pattern in test_patterns {
print("Testing pattern: '" + pattern + "'");
// Check which pods would match this pattern
let matching_pods = [];
for pod in current_pods {
// Simple pattern matching simulation (Rhai doesn't have regex, so this is illustrative)
if pod.contains("test") && pattern == "test-.*" {
matching_pods.push(pod);
} else if pod.contains("temp") && pattern == ".*-temp$" {
matching_pods.push(pod);
} else if pod.contains("demo") && pattern == "demo-pod-.*" {
matching_pods.push(pod);
} else if pod.contains("nginx") && pattern == "nginx-.*" {
matching_pods.push(pod);
}
}
print(" Would match " + matching_pods.len() + " pods: " + matching_pods);
}
// Example of safe deletion patterns
print("\n--- Safe Deletion Examples ---");
print("These patterns are designed to be safe for testing:");
let safe_patterns = [
"test-example-.*", // Very specific test resources
"sal-demo-.*", // SAL demo resources
"temp-resource-.*", // Temporary resources
];
for pattern in safe_patterns {
print("\nTesting safe pattern: '" + pattern + "'");
try {
// This will actually attempt deletion, but should be safe in a test environment
let deleted_count = delete(test_km, pattern);
print("✓ Pattern '" + pattern + "' matched and deleted " + deleted_count + " resources");
} catch(e) {
print("Note: Pattern '" + pattern + "' - " + e);
}
}
// Show resources after deletion attempts
print("\n--- Resources After Deletion Attempts ---");
let final_counts = resource_counts(test_km);
print("Final resource counts:");
for resource_type in final_counts.keys() {
print(" " + resource_type + ": " + final_counts[resource_type]);
}
// Example of individual resource deletion
print("\n--- Individual Resource Deletion Examples ---");
print("These functions delete specific resources by name:");
// These are examples - they will fail if the resources don't exist, which is expected
let example_deletions = [
["pod", "test-pod-example"],
["service", "test-service-example"],
["deployment", "test-deployment-example"],
];
for deletion in example_deletions {
let resource_type = deletion[0];
let resource_name = deletion[1];
print("Attempting to delete " + resource_type + ": " + resource_name);
try {
if resource_type == "pod" {
pod_delete(test_km, resource_name);
} else if resource_type == "service" {
service_delete(test_km, resource_name);
} else if resource_type == "deployment" {
deployment_delete(test_km, resource_name);
}
print("✓ Successfully deleted " + resource_type + ": " + resource_name);
} catch(e) {
print("Note: " + resource_type + " '" + resource_name + "' - " + e);
}
}
print("\n--- Best Practices for Pattern Deletion ---");
print("1. Always test patterns in a safe environment first");
print("2. Use specific patterns rather than broad ones");
print("3. Consider using dry-run approaches when possible");
print("4. Have backups or be able to recreate resources");
print("5. Use descriptive naming conventions for easier pattern matching");
print("\n--- Cleanup ---");
print("To clean up the test namespace:");
print(" kubectl delete namespace " + test_namespace);
print("\n=== Pattern deletion example completed! ===");


@@ -0,0 +1,33 @@
//! Test Kubernetes module registration
//!
//! This script tests that the Kubernetes module is properly registered
//! and available in the Rhai environment.
print("=== Testing Kubernetes Module Registration ===");
// Test that we can reference the kubernetes functions
print("Testing function registration...");
// These should not error even if we can't connect to a cluster
let functions_to_test = [
"kubernetes_manager_new",
"pods_list",
"services_list",
"deployments_list",
"delete",
"namespace_create",
"namespace_exists",
"resource_counts",
"pod_delete",
"service_delete",
"deployment_delete",
"namespace"
];
for func_name in functions_to_test {
print("✓ Function '" + func_name + "' is available");
}
print("\n=== All Kubernetes functions are properly registered! ===");
print("Note: To test actual functionality, you need a running Kubernetes cluster.");
print("See other examples in this directory for real cluster operations.");


@@ -0,0 +1,133 @@
// Basic example of using the Mycelium client in Rhai
// API URL for Mycelium
let api_url = "http://localhost:8989";
// Get node information
print("Getting node information:");
try {
let node_info = mycelium_get_node_info(api_url);
print(`Node subnet: ${node_info.nodeSubnet}`);
print(`Node public key: ${node_info.nodePubkey}`);
} catch(err) {
print(`Error getting node info: ${err}`);
}
// List all peers
print("\nListing all peers:");
try {
let peers = mycelium_list_peers(api_url);
if peers.is_empty() {
print("No peers connected.");
} else {
for peer in peers {
print(`Peer Endpoint: ${peer.endpoint.proto}://${peer.endpoint.socketAddr}`);
print(` Type: ${peer.type}`);
print(` Connection State: ${peer.connectionState}`);
print(` Bytes sent: ${peer.txBytes}`);
print(` Bytes received: ${peer.rxBytes}`);
}
}
} catch(err) {
print(`Error listing peers: ${err}`);
}
// Add a new peer
print("\nAdding a new peer:");
let new_peer_address = "tcp://65.21.231.58:9651";
try {
let result = mycelium_add_peer(api_url, new_peer_address);
print(`Peer added: ${result.success}`);
} catch(err) {
print(`Error adding peer: ${err}`);
}
// List selected routes
print("\nListing selected routes:");
try {
let routes = mycelium_list_selected_routes(api_url);
if routes.is_empty() {
print("No selected routes.");
} else {
for route in routes {
print(`Subnet: ${route.subnet}`);
print(` Next hop: ${route.nextHop}`);
print(` Metric: ${route.metric}`);
}
}
} catch(err) {
print(`Error listing routes: ${err}`);
}
// List fallback routes
print("\nListing fallback routes:");
try {
let routes = mycelium_list_fallback_routes(api_url);
if routes.is_empty() {
print("No fallback routes.");
} else {
for route in routes {
print(`Subnet: ${route.subnet}`);
print(` Next hop: ${route.nextHop}`);
print(` Metric: ${route.metric}`);
}
}
} catch(err) {
print(`Error listing fallback routes: ${err}`);
}
// Send a message
// TO SEND A MESSAGE FILL IN THE DESTINATION IP ADDRESS
// -----------------------------------------------------//
// print("\nSending a message:");
// let destination = < FILL IN CORRECT DEST IP >
// let topic = "test";
// let message = "Hello from Rhai!";
// let deadline_secs = 60;
// try {
// let result = mycelium_send_message(api_url, destination, topic, message, deadline_secs);
// print(`Message sent: ${result.success}`);
// if result.id {
// print(`Message ID: ${result.id}`);
// }
// } catch(err) {
// print(`Error sending message: ${err}`);
// }
// Receive messages
// RECEIVING MESSAGES SHOULD BE DONE ON THE DESTINATION NODE FROM THE CALL ABOVE
// -----------------------------------------------------------------------------//
// print("\nReceiving messages:");
// let receive_topic = "test";
// let count = 5;
// try {
// let messages = mycelium_receive_messages(api_url, receive_topic, count);
// if messages.is_empty() {
// print("No messages received.");
// } else {
// for msg in messages {
// print(`Message from: ${msg.source}`);
// print(` Topic: ${msg.topic}`);
// print(` Content: ${msg.content}`);
// print(` Timestamp: ${msg.timestamp}`);
// }
// }
// } catch(err) {
// print(`Error receiving messages: ${err}`);
// }
// Remove a peer
print("\nRemoving a peer:");
let peer_id = "tcp://65.21.231.58:9651"; // This is the peer we added earlier
try {
let result = mycelium_remove_peer(api_url, peer_id);
print(`Peer removed: ${result.success}`);
} catch(err) {
print(`Error removing peer: ${err}`);
}


@@ -0,0 +1,31 @@
// Script to receive Mycelium messages
// API URL for Mycelium
let api_url = "http://localhost:2222";
// Receive messages
// This script will listen for messages on a specific topic.
// Ensure the sender script is using the same topic.
// -----------------------------------------------------------------------------//
print("\nReceiving messages:");
let receive_topic = "test_topic";
let wait_deadline_secs = 100;
print(`Listening for messages on topic '${receive_topic}'...`);
try {
let messages = mycelium_receive_messages(api_url, receive_topic, wait_deadline_secs);
if messages.is_empty() {
print("No new messages received in this poll.");
} else {
for msg in messages {
print("Received a message:");
print(` Message id: ${msg.id}`);
print(` Message from: ${msg.srcIp}`);
print(` Topic: ${msg.topic}`);
print(` Payload: ${msg.payload}`);
}
}
} catch(err) {
print(`Error receiving messages: ${err}`);
}
print("Finished attempting to receive messages.");


@@ -0,0 +1,25 @@
// Script to send a Mycelium message
// API URL for Mycelium
let api_url = "http://localhost:1111";
// Send a message
// TO SEND A MESSAGE FILL IN THE DESTINATION IP ADDRESS
// -----------------------------------------------------//
print("\nSending a message:");
let destination = "50e:6d75:4568:366e:f75:2ac3:bbb1:3fdd"; // IMPORTANT: Replace with the actual destination IP address
let topic = "test_topic";
let message = "Hello from Rhai sender!";
let deadline_secs = 10; // seconds to wait for a reply
try {
print(`Attempting to send message to ${destination} on topic '${topic}'`);
let result = mycelium_send_message(api_url, destination, topic, message, deadline_secs);
print(`result: ${result}`);
print(`Message sent: ${result.success}`);
if result.id != "" {
print(`Message ID: ${result.id}`);
}
} catch(err) {
print(`Error sending message: ${err}`);
}


@@ -0,0 +1,83 @@
// Example of using the network modules in SAL
// Shows TCP port checking, HTTP URL validation, and SSH command execution
// Function to print section header
fn section(title) {
print("\n");
print("==== " + title + " ====");
print("\n");
}
// TCP connectivity checks
section("TCP Connectivity");
// Create a TCP connector
let tcp = net::new_tcp_connector();
// Check if a port is open
let host = "localhost";
let port = 22;
print(`Checking if port ${port} is open on ${host}...`);
let is_open = tcp.check_port(host, port);
print(`Port ${port} is ${if is_open { "open" } else { "closed" }}`);
// Check multiple ports
let ports = [22, 80, 443];
print(`Checking multiple ports on ${host}...`);
let port_results = tcp.check_ports(host, ports);
for result in port_results {
print(`Port ${result.port} is ${if result.is_open { "open" } else { "closed" }}`);
}
// HTTP connectivity checks
section("HTTP Connectivity");
// Create an HTTP connector
let http = net::new_http_connector();
// Check if a URL is reachable
let url = "https://www.example.com";
print(`Checking if ${url} is reachable...`);
let is_reachable = http.check_url(url);
print(`${url} is ${if is_reachable { "reachable" } else { "unreachable" }}`);
// Check the status code of a URL
print(`Checking status code of ${url}...`);
let status = http.check_status(url);
if status != () {
print(`Status code: ${status}`);
} else {
print("Failed to get status code");
}
// Only attempt SSH if port 22 is open
if is_open {
// SSH connectivity checks
section("SSH Connectivity");
// Create an SSH connection to localhost (if SSH server is running)
print("Attempting to connect to SSH server on localhost...");
// Using the builder pattern
let ssh = net::new_ssh_builder()
.host("localhost")
.port(22)
.user(if os::get_env("USER") != () { os::get_env("USER") } else { "root" })
.build();
// Execute a simple command
print("Executing 'uname -a' command...");
let result = ssh.execute("uname -a");
if result.code == 0 {
print("Command output:");
print(result.output);
} else {
print(`Command failed with exit code: ${result.code}`);
print(result.output);
}
}
print("\nNetwork connectivity checks completed.");


@@ -0,0 +1,83 @@
// Example of using the network modules in SAL through Rhai
// Shows TCP port checking, HTTP URL validation, and SSH command execution
// Function to print section header
fn section(title) {
print("\n");
print("==== " + title + " ====");
print("\n");
}
// TCP connectivity checks
section("TCP Connectivity");
// Create a TCP connector
let tcp = net::new_tcp_connector();
// Check if a port is open
let host = "localhost";
let port = 22;
print(`Checking if port ${port} is open on ${host}...`);
let is_open = tcp.check_port(host, port);
print(`Port ${port} is ${if is_open { "open" } else { "closed" }}`);
// Check multiple ports
let ports = [22, 80, 443];
print(`Checking multiple ports on ${host}...`);
let port_results = tcp.check_ports(host, ports);
for result in port_results {
print(`Port ${result.port} is ${if result.is_open { "open" } else { "closed" }}`);
}
// HTTP connectivity checks
section("HTTP Connectivity");
// Create an HTTP connector
let http = net::new_http_connector();
// Check if a URL is reachable
let url = "https://www.example.com";
print(`Checking if ${url} is reachable...`);
let is_reachable = http.check_url(url);
print(`${url} is ${if is_reachable { "reachable" } else { "unreachable" }}`);
// Check the status code of a URL
print(`Checking status code of ${url}...`);
let status = http.check_status(url);
if status != () {
print(`Status code: ${status}`);
} else {
print("Failed to get status code");
}
// Get content from a URL
print(`Getting content from ${url}...`);
let content = http.get_content(url);
print(`Content length: ${content.len()} characters`);
print(`First 100 characters: ${content.substr(0, 100)}...`);
// Only attempt SSH if port 22 is open
if is_open {
// SSH connectivity checks
section("SSH Connectivity");
// Create an SSH connection to localhost (if SSH server is running)
print("Attempting to connect to SSH server on localhost...");
// Using the builder pattern
let ssh = net::new_ssh_builder()
.host("localhost")
.port(22)
.user(if os::get_env("USER") != () { os::get_env("USER") } else { "root" })
.timeout(10)
.build();
// Execute a simple command
print("Executing 'uname -a' command...");
let result = ssh.execute("uname -a");
print(`Command exit code: ${result.code}`);
print(`Command output: ${result.output}`);
}
print("\nNetwork connectivity checks completed.");


@@ -1,7 +1,7 @@
-print("Running a basic command using run().do()...");
+print("Running a basic command using run().execute()...");
// Execute a simple command
-let result = run("echo Hello from run_basic!").do();
+let result = run("echo Hello from run_basic!").execute();
// Print the command result
print(`Command: echo Hello from run_basic!`);
@@ -13,6 +13,6 @@ print(`Stderr:\n${result.stderr}`);
// Example of a command that might fail (if 'nonexistent_command' doesn't exist)
// This will halt execution by default because ignore_error() is not used.
// print("Running a command that will fail (and should halt)...");
-// let fail_result = run("nonexistent_command").do(); // This line will cause the script to halt if the command doesn't exist
+// let fail_result = run("nonexistent_command").execute(); // This line will cause the script to halt if the command doesn't exist
print("Basic run() example finished.");


@@ -2,7 +2,7 @@ print("Running a command that will fail, but ignoring the error...");
// Run a command that exits with a non-zero code (will fail)
// Using .ignore_error() prevents the script from halting
-let result = run("exit 1").ignore_error().do();
+let result = run("exit 1").ignore_error().execute();
print(`Command finished.`);
print(`Success: ${result.success}`); // This should be false
@@ -22,7 +22,7 @@ print("\nScript continued execution after the potentially failing command.");
// Example of a command that might fail due to OS error (e.g., command not found)
// This *might* still halt depending on how the underlying Rust function handles it,
// as ignore_error() primarily prevents halting on *command* non-zero exit codes.
-// let os_error_result = run("nonexistent_command_123").ignore_error().do();
+// let os_error_result = run("nonexistent_command_123").ignore_error().execute();
// print(`OS Error Command Success: ${os_error_result.success}`);
// print(`OS Error Command Exit Code: ${os_error_result.code}`);


@@ -1,8 +1,8 @@
-print("Running a command using run().log().do()...");
+print("Running a command using run().log().execute()...");
// The .log() method will print the command string to the console before execution.
// This is useful for debugging or tracing which commands are being run.
-let result = run("echo This command is logged").log().do();
+let result = run("echo This command is logged").log().execute();
print(`Command finished.`);
print(`Success: ${result.success}`);


@@ -1,8 +1,8 @@
-print("Running a command using run().silent().do()...\n");
+print("Running a command using run().silent().execute()...\n");
// This command will print to standard output and standard error
// However, because .silent() is used, the output will not appear in the console directly
-let result = run("echo 'This should be silent stdout.'; echo 'This should be silent stderr.' >&2; exit 0").silent().do();
+let result = run("echo 'This should be silent stdout.'; echo 'This should be silent stderr.' >&2; exit 0").silent().execute();
// The output is still captured in the CommandResult
print(`Command finished.`);
@@ -12,7 +12,7 @@ print(`Captured Stdout:\\n${result.stdout}`);
print(`Captured Stderr:\\n${result.stderr}`);
// Example of a silent command that fails (but won't halt because we only suppress output)
-// let fail_result = run("echo 'This is silent failure stderr.' >&2; exit 1").silent().do();
+// let fail_result = run("echo 'This is silent failure stderr.' >&2; exit 1").silent().execute();
// print(`Failed command finished (silent):`);
// print(`Success: ${fail_result.success}`);
// print(`Exit Code: ${fail_result.code}`);


@@ -0,0 +1,43 @@
# RFS Client Rhai Examples
This folder contains Rhai examples that use the SAL RFS client wrappers registered by `sal::rhai::register(&mut engine)`; the scripts are executed by the `herodo` binary.
## Quick start
Run the auth + upload + download example (uses hardcoded credentials and `/etc/hosts` as input):
```bash
cargo run -p herodo -- examples/rfsclient/auth_and_upload.rhai
```
By default, the script:
- Uses base URL `http://127.0.0.1:8080`
- Uses credentials `user` / `password`
- Uploads the file `/etc/hosts`
- Downloads to `/tmp/rfs_example_out.txt`
To customize, edit `examples/rfsclient/auth_and_upload.rhai` near the top and change `BASE_URL`, `USER`, `PASS`, and file paths.
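The relevant block at the top of the script looks like this (the values shown are the shipped defaults):
```rhai
// Default connection settings in auth_and_upload.rhai
let BASE_URL = "http://127.0.0.1:8080"; // RFS server base URL
let USER = "user";
let PASS = "password";
let TIMEOUT = 30; // request timeout in seconds
```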
## What the example does
- Creates the RFS client: `rfs_create_client(BASE_URL, USER, PASS, TIMEOUT)`
- Health check: `rfs_health_check()`
- Authenticates: `rfs_authenticate()`
- Uploads a file: `rfs_upload_file(local_path, chunk_size, verify)` → returns file hash
- Downloads it back: `rfs_download_file(file_id_or_hash, dest_path, verify)` → returns unit (throws on error)
See `examples/rfsclient/auth_and_upload.rhai` for details.
## Using the Rust client directly (optional)
If you want to use the Rust API (without Rhai), depend on `sal-rfs-client` and see:
- `packages/clients/rfsclient/src/client.rs` (`RfsClient`)
- `packages/clients/rfsclient/src/types.rs` (config and option types)
- `packages/clients/rfsclient/examples/` (example usage)
## Troubleshooting
- Auth failures: verify credentials and that the server requires/authenticates them.
- Connection errors: verify the base URL is reachable from your machine.

View File

@@ -0,0 +1,41 @@
// RFS Client: Auth + Upload + Download example
// Prereqs:
// - RFS server reachable at RFS_BASE_URL
// - Valid credentials in env: RFS_USER, RFS_PASS
// - Run with herodo so the SAL Rhai modules are registered
// NOTE: env_get not available in this runtime; hardcode or replace with your env loader
let BASE_URL = "http://127.0.0.1:8080";
let USER = "user";
let PASS = "password";
let TIMEOUT = 30; // seconds
if BASE_URL == "" { throw "Set BASE_URL in the script"; }
// Create client
let ok = rfs_create_client(BASE_URL, USER, PASS, TIMEOUT);
if !ok { throw "Failed to create RFS client"; }
// Optional health check
let health = rfs_health_check();
print(`RFS health: ${health}`);
// Authenticate (required for some operations)
let auth_ok = rfs_authenticate();
if !auth_ok { throw "Authentication failed"; }
// Upload a local file
// Use an existing readable file to avoid needing os_write_file module
let local_file = "/etc/hosts";
// rfs_upload_file(file_path, chunk_size, verify)
let hash = rfs_upload_file(local_file, 0, false);
print(`Uploaded file hash: ${hash}`);
// Download it back
let out_path = "/tmp/rfs_example_out.txt";
// rfs_download_file(file_id, output_path, verify) returns unit and throws on error
rfs_download_file(hash, out_path, false);
print(`Downloaded to: ${out_path}`);
true


@@ -0,0 +1,116 @@
# Service Manager Examples
This directory contains examples demonstrating the SAL service manager functionality for dynamically launching and managing services across platforms.
## Overview
The service manager provides a unified interface for managing system services:
- **macOS**: Uses `launchctl` for service management
- **Linux**: Uses `zinit` for service management (systemd is also available as an alternative)
## Examples
### 1. Circle Worker Manager (`circle_worker_manager.rhai`)
**Primary Use Case**: Demonstrates dynamic circle worker management for freezone residents.
This example shows:
- Creating service configurations for circle workers
- Complete service lifecycle management (start, stop, restart, remove)
- Status monitoring and log retrieval
- Error handling and cleanup
```bash
# Run the circle worker management example
herodo examples/service_manager/circle_worker_manager.rhai
```
### 2. Basic Usage (`basic_usage.rhai`)
**Learning Example**: Simple demonstration of the core service manager API.
This example covers:
- Creating and configuring services
- Starting and stopping services
- Checking service status
- Listing managed services
- Retrieving service logs
```bash
# Run the basic usage example
herodo examples/service_manager/basic_usage.rhai
```
## Prerequisites
### Linux (zinit)
Make sure zinit is installed and running:
```bash
# Start zinit with default socket
zinit -s /tmp/zinit.sock init
```
### macOS (launchctl)
No additional setup required - uses the built-in launchctl system.
## Service Manager API
The service manager provides these key functions:
- `create_service_manager()` - Create platform-appropriate service manager
- `start(manager, config)` - Start a new service
- `stop(manager, service_name)` - Stop a running service
- `restart(manager, service_name)` - Restart a service
- `status(manager, service_name)` - Get service status
- `logs(manager, service_name, lines)` - Retrieve service logs
- `list(manager)` - List all managed services
- `remove(manager, service_name)` - Remove a service
- `exists(manager, service_name)` - Check if service exists
- `start_and_confirm(manager, config, timeout)` - Start with confirmation
## Service Configuration
Services are configured using a map with these fields:
```rhai
let config = #{
name: "my-service", // Service name
binary_path: "/usr/bin/my-app", // Executable path
args: ["--config", "/etc/my-app.conf"], // Command arguments
working_directory: "/var/lib/my-app", // Working directory (optional)
environment: #{ // Environment variables
"VAR1": "value1",
"VAR2": "value2"
},
auto_restart: true // Auto-restart on failure
};
```
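Putting these together, a minimal lifecycle sketch (assuming the script runs under `herodo`, which registers the service manager module, and `config` is the map shown above):
```rhai
// Create the platform-appropriate manager, then walk the full lifecycle
let manager = create_service_manager();
start(manager, config); // launch the service
print(status(manager, config.name)); // inspect its state
print(logs(manager, config.name, 10)); // last 10 log lines
stop(manager, config.name);
remove(manager, config.name); // clean up when done
```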
## Real-World Usage
The circle worker example demonstrates the exact use case requested by the team:
> "We want to be able to launch circle workers dynamically. For instance when someone registers to the freezone, we need to be able to launch a circle worker for the new resident."
The service manager enables:
1. **Dynamic service creation** - Create services on-demand for new residents
2. **Cross-platform support** - Works on both macOS and Linux
3. **Lifecycle management** - Full control over service lifecycle
4. **Monitoring and logging** - Track service status and retrieve logs
5. **Cleanup** - Proper service removal when no longer needed
## Error Handling
All service manager functions can throw errors. Use try-catch blocks for robust error handling:
```rhai
try {
sm::start(manager, config);
print("✅ Service started successfully");
} catch (error) {
print(`❌ Failed to start service: ${error}`);
}
```


@@ -0,0 +1,85 @@
// Basic Service Manager Usage Example
//
// This example demonstrates the basic API of the service manager.
// It works on both macOS (launchctl) and Linux (zinit/systemd).
//
// Prerequisites:
//
// Linux: The service manager will automatically discover running zinit servers
// or fall back to systemd. To use zinit, start it with:
// zinit -s /tmp/zinit.sock init
//
// You can also specify a custom socket path:
// export ZINIT_SOCKET_PATH=/your/custom/path/zinit.sock
//
// macOS: No additional setup required (uses launchctl).
//
// Usage:
// herodo examples/service_manager/basic_usage.rhai
// Service Manager Basic Usage Example
// This example uses the SAL service manager through Rhai integration
print("🚀 Basic Service Manager Usage Example");
print("======================================");
// Create a service manager for the current platform
let manager = create_service_manager();
print("🍎 Using service manager for current platform");
// Create a simple service configuration
let config = #{
name: "example-service",
binary_path: "/bin/echo",
args: ["Hello from service manager!"],
working_directory: "/tmp",
environment: #{
"EXAMPLE_VAR": "hello_world"
},
auto_restart: false
};
print("\n📝 Service Configuration:");
print(` Name: ${config.name}`);
print(` Binary: ${config.binary_path}`);
print(` Args: ${config.args}`);
// Start the service
print("\n🚀 Starting service...");
start(manager, config);
print("✅ Service started successfully");
// Check service status
print("\n📊 Checking service status...");
let status = status(manager, "example-service");
print(`Status: ${status}`);
// List all services
print("\n📋 Listing all managed services...");
let services = list(manager);
print(`Found ${services.len()} services:`);
for service in services {
print(` - ${service}`);
}
// Get service logs
print("\n📄 Getting service logs...");
let logs = logs(manager, "example-service", 5);
if logs.trim() == "" {
print("No logs available");
} else {
print(`Logs:\n${logs}`);
}
// Stop the service
print("\n🛑 Stopping service...");
stop(manager, "example-service");
print("✅ Service stopped");
// Remove the service
print("\n🗑 Removing service...");
remove(manager, "example-service");
print("✅ Service removed");
print("\n🎉 Example completed successfully!");


@@ -0,0 +1,141 @@
// Circle Worker Manager Example
//
// This example demonstrates how to use the service manager to dynamically launch
// circle workers for new freezone residents. This is the primary use case requested
// by the team.
//
// Usage:
//
// On macOS (uses launchctl):
// herodo examples/service_manager/circle_worker_manager.rhai
//
// On Linux (uses zinit - requires zinit to be running):
// First start zinit: zinit -s /tmp/zinit.sock init
// herodo examples/service_manager/circle_worker_manager.rhai
// Circle Worker Manager Example
// This example uses the SAL service manager through Rhai integration
print("🚀 Circle Worker Manager Example");
print("=================================");
// Create the appropriate service manager for the current platform
let service_manager = create_service_manager();
print("✅ Created service manager for current platform");
// Simulate a new freezone resident registration
let resident_id = "resident_12345";
let worker_name = `circle-worker-${resident_id}`;
print(`\n📝 New freezone resident registered: ${resident_id}`);
print(`🔧 Creating circle worker service: ${worker_name}`);
// Create service configuration for the circle worker
let config = #{
name: worker_name,
binary_path: "/bin/sh",
args: [
"-c",
`echo 'Circle worker for ${resident_id} starting...'; sleep 30; echo 'Circle worker for ${resident_id} completed'`
],
working_directory: "/tmp",
environment: #{
"RESIDENT_ID": resident_id,
"WORKER_TYPE": "circle",
"LOG_LEVEL": "info"
},
auto_restart: true
};
print("📋 Service configuration created:");
print(` Name: ${config.name}`);
print(` Binary: ${config.binary_path}`);
print(` Args: ${config.args}`);
print(` Auto-restart: ${config.auto_restart}`);
print(`\n🔄 Demonstrating service lifecycle for: ${worker_name}`);
// 1. Check if service already exists
print("\n1⃣ Checking if service exists...");
if exists(service_manager, worker_name) {
print("⚠️ Service already exists, removing it first...");
remove(service_manager, worker_name);
print("🗑️ Existing service removed");
} else {
print("✅ Service doesn't exist, ready to create");
}
// 2. Start the service
print("\n2⃣ Starting the circle worker service...");
start(service_manager, config);
print("✅ Service started successfully");
// 3. Check service status
print("\n3⃣ Checking service status...");
let status = status(service_manager, worker_name);
print(`📊 Service status: ${status}`);
// 4. List all services to show our service is there
print("\n4⃣ Listing all managed services...");
let services = list(service_manager);
print(`📋 Managed services (${services.len()}):`);
for service in services {
let marker = if service == worker_name { "👉" } else { " " };
print(` ${marker} ${service}`);
}
// 5. Wait a moment and check status again
print("\n5⃣ Waiting 3 seconds and checking status again...");
sleep(3000); // 3 seconds in milliseconds
let status = status(service_manager, worker_name);
print(`📊 Service status after 3s: ${status}`);
// 6. Get service logs
print("\n6⃣ Retrieving service logs...");
let logs = logs(service_manager, worker_name, 10);
if logs.trim() == "" {
print("📄 No logs available yet (this is normal for new services)");
} else {
print("📄 Recent logs:");
let log_lines = logs.split('\n');
for i in 0..5 {
if i < log_lines.len() {
print(` ${log_lines[i]}`);
}
}
}
// 7. Demonstrate start_and_confirm with timeout
print("\n7⃣ Testing start_and_confirm (should succeed quickly since already running)...");
start_and_confirm(service_manager, config, 5);
print("✅ Service confirmed running within timeout");
// 8. Stop the service
print("\n8⃣ Stopping the service...");
stop(service_manager, worker_name);
print("🛑 Service stopped");
// 9. Check status after stopping
print("\n9⃣ Checking status after stop...");
let status = status(service_manager, worker_name);
print(`📊 Service status after stop: ${status}`);
// 10. Restart the service
print("\n🔟 Restarting the service...");
restart(service_manager, worker_name);
print("🔄 Service restarted successfully");
// 11. Final cleanup
print("\n🧹 Cleaning up - removing the service...");
remove(service_manager, worker_name);
print("🗑️ Service removed successfully");
// 12. Verify removal
print("\n✅ Verifying service removal...");
if !exists(service_manager, worker_name) {
print("✅ Service successfully removed");
} else {
print("⚠️ Service still exists after removal");
}
print("\n🎉 Circle worker management demonstration complete!");


@@ -0,0 +1,78 @@
// Basic example of using the Zinit client in Rhai
// Socket path for Zinit
let socket_path = "/tmp/zinit.sock";
// List all services
print("Listing all services:");
let services = zinit_list(socket_path);
if services.is_empty() {
print("No services found.");
} else {
// Iterate over the keys of the map
for name in services.keys() {
let state = services[name];
print(`${name}: ${state}`);
}
}
// Get status of a specific service
let service_name = "test";
print(`Getting status for ${service_name}:`);
try {
let status = zinit_status(socket_path, service_name);
print(`Service: ${status.name}`);
print(`PID: ${status.pid}`);
print(`State: ${status.state}`);
print(`Target: ${status.target}`);
print("Dependencies:");
for dep in status.after.keys() {
let state = status.after[dep];
print(` ${dep}: ${state}`);
}
} catch(err) {
print(`Error getting status: ${err}`);
}
// Create a new service
print("\nCreating a new service:");
let new_service = "rhai-test-service";
let exec_command = "echo 'Hello from Rhai'";
let oneshot = true;
try {
let result = zinit_create_service(socket_path, new_service, exec_command, oneshot);
print(`Service created: ${result}`);
// Monitor the service
print("\nMonitoring the service:");
let monitor_result = zinit_monitor(socket_path, new_service);
print(`Service monitored: ${monitor_result}`);
// Start the service
print("\nStarting the service:");
let start_result = zinit_start(socket_path, new_service);
print(`Service started: ${start_result}`);
// Get logs for a specific service
print("\nGetting logs:");
let logs = zinit_logs(socket_path, new_service);
for log in logs {
print(log);
}
// Clean up
print("\nCleaning up:");
let stop_result = zinit_stop(socket_path, new_service);
print(`Service stopped: ${stop_result}`);
let forget_result = zinit_forget(socket_path, new_service);
print(`Service forgotten: ${forget_result}`);
let delete_result = zinit_delete_service(socket_path, new_service);
print(`Service deleted: ${delete_result}`);
} catch(err) {
print(`Error: ${err}`);
}


@@ -0,0 +1,41 @@
// Basic example of using the Zinit client in Rhai
// Socket path for Zinit
let socket_path = "/tmp/zinit.sock";
// Create a new service
print("\nCreating a new service:");
let new_service = "rhai-test-service";
let exec_command = "echo 'Hello from Rhai'";
let oneshot = true;
let result = zinit_create_service(socket_path, new_service, exec_command, oneshot);
print(`Service created: ${result}`);
// Monitor the service
print("\nMonitoring the service:");
let monitor_result = zinit_monitor(socket_path, new_service);
print(`Service monitored: ${monitor_result}`);
// Start the service
print("\nStarting the service:");
let start_result = zinit_start(socket_path, new_service);
print(`Service started: ${start_result}`);
// Get logs for a specific service
print("\nGetting logs:");
let logs = zinit_logs(socket_path, new_service);
for log in logs {
print(log);
}
// Clean up
print("\nCleaning up:");
let stop_result = zinit_stop(socket_path, new_service);
print(`Service stopped: ${stop_result}`);
let forget_result = zinit_forget(socket_path, new_service);
print(`Service forgotten: ${forget_result}`);
let delete_result = zinit_delete_service(socket_path, new_service);
print(`Service deleted: ${delete_result}`);


@@ -0,0 +1,15 @@
[package]
name = "openrouter_example"
version = "0.1.0"
edition = "2021"
[workspace]
[[bin]]
name = "openrouter_example"
path = "openrouter_example.rs"
[dependencies]
codemonkey = { path = "../../packages/ai/codemonkey" }
openai-api-rs = "6.0.8"
tokio = { version = "1.0", features = ["full"] }


@@ -0,0 +1,47 @@
use codemonkey::{create_ai_provider, AIProviderType, CompletionRequestBuilder, Message, MessageRole, Content};
use std::error::Error;
#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
let (mut provider, provider_type) = create_ai_provider(AIProviderType::OpenRouter)?;
let messages = vec![Message {
role: MessageRole::user,
content: Content::Text("Explain the concept of a factory design pattern in Rust.".to_string()),
name: None,
tool_calls: None,
tool_call_id: None,
}];
println!("Sending request to OpenRouter...");
let response = CompletionRequestBuilder::new(
&mut *provider,
"openai/gpt-oss-120b".to_string(), // Model name as specified by the user
messages,
provider_type, // Pass the provider_type
)
.temperature(1.0)
.max_tokens(8192)
.top_p(1.0)
.reasoning_effort("medium")
.stream(false)
.openrouter_options(|builder| {
builder.provider(
codemonkey::OpenRouterProviderOptionsBuilder::new()
.order(vec!["cerebras"])
.build(),
)
})
.completion()
.await?;
for choice in response.choices {
if let Some(content) = choice.message.content {
print!("{}", content);
}
}
println!();
Ok(())
}

examples_rust/ai/run.sh Executable file

@@ -0,0 +1,13 @@
#!/bin/bash
set -e
# Change to directory where this script is located
cd "$(dirname "${BASH_SOURCE[0]}")"
source ../../config/myenv.sh
# Build the example
cargo build
# Run the example
cargo run --bin openrouter_example

herodo/Cargo.toml Normal file

@@ -0,0 +1,25 @@
[package]
name = "herodo"
version = "0.1.0"
edition = "2021"
authors = ["PlanetFirst <info@incubaid.com>"]
description = "Herodo - A Rhai script executor for SAL (System Abstraction Layer)"
repository = "https://git.threefold.info/herocode/sal"
license = "Apache-2.0"
keywords = ["rhai", "scripting", "automation", "sal", "system"]
categories = ["command-line-utilities", "development-tools"]
[[bin]]
name = "herodo"
path = "src/main.rs"
[dependencies]
# Core dependencies for herodo binary
env_logger = { workspace = true }
rhai = { workspace = true }
# SAL library for Rhai module registration (with all features for herodo)
sal = { path = "..", features = ["all"] }
[dev-dependencies]
tempfile = { workspace = true }

herodo/README.md Normal file

@@ -0,0 +1,160 @@
# Herodo - Rhai Script Executor for SAL
**Version: 0.1.0**
Herodo is a command-line utility that executes Rhai scripts with full access to the SAL (System Abstraction Layer) library. It provides a powerful scripting environment for automation and system management tasks.
## Features
- **Single Script Execution**: Execute individual `.rhai` script files
- **Directory Execution**: Execute all `.rhai` scripts in a directory (recursively)
- **Sorted Execution**: Scripts are executed in alphabetical order for predictable behavior
- **SAL Integration**: Full access to all SAL modules and functions
- **Error Handling**: Clear error messages and proper exit codes
- **Logging Support**: Built-in logging with `env_logger`
## Installation
### Build and Install
```bash
git clone https://github.com/PlanetFirst/sal.git
cd sal
./build_herodo.sh
```
This script will:
- Build herodo in debug mode
- Install it to `~/hero/bin/herodo` (non-root) or `/usr/local/bin/herodo` (root)
- Make it available in your PATH
**Note**: If using the non-root installation, make sure `~/hero/bin` is in your PATH:
```bash
export PATH="$HOME/hero/bin:$PATH"
```
### Install from crates.io (Coming Soon)
```bash
# This will be available once herodo is published to crates.io
cargo install herodo
```
**Note**: `herodo` is not yet published to crates.io due to publishing rate limits. It will be available soon.
## Usage
### Execute a Single Script
```bash
herodo path/to/script.rhai
```
### Execute All Scripts in a Directory
```bash
herodo path/to/scripts/
```
When given a directory, herodo will:
1. Recursively find all `.rhai` files
2. Sort them alphabetically
3. Execute them in order
4. Stop on the first error
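For example, given a layout like the following (hypothetical file names), the scripts run in the sorted order shown:
```bash
# scripts/ contains: 01_first.rhai, 02_second.rhai, subdir/03_third.rhai
herodo scripts/
# Executes 01_first.rhai, then 02_second.rhai, then subdir/03_third.rhai
```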
## Example Scripts
### Basic Script
```rhai
// hello.rhai
println("Hello from Herodo!");
let result = 42 * 2;
println("Result: " + result);
```
### Using SAL Functions
```rhai
// system_info.rhai
println("=== System Information ===");
// Check if a file exists
let config_exists = exist("/etc/hosts");
println("Config file exists: " + config_exists);
// Download a file
download("https://example.com/data.txt", "/tmp/data.txt");
println("File downloaded successfully");
// Execute a system command
let output = run("ls -la /tmp");
println("Directory listing:");
println(output.stdout);
```
### Redis Operations
```rhai
// redis_example.rhai
println("=== Redis Operations ===");
// Set a value
redis_set("app_status", "running");
println("Status set in Redis");
// Get the value
let status = redis_get("app_status");
println("Current status: " + status);
```
## Available SAL Functions
Herodo provides access to all SAL modules through Rhai:
- **File System**: `exist()`, `mkdir()`, `delete()`, `file_size()`
- **Downloads**: `download()`, `download_install()`
- **Process Management**: `run()`, `kill()`, `process_list()`
- **Redis**: `redis_set()`, `redis_get()`, `redis_del()`
- **PostgreSQL**: Database operations and management
- **Network**: HTTP requests, SSH operations, TCP connectivity
- **Virtualization**: Container operations with Buildah and Nerdctl
- **Text Processing**: String manipulation and template rendering
- **And many more...**
## Error Handling
Herodo provides clear error messages and appropriate exit codes:
- **Exit Code 0**: All scripts executed successfully
- **Exit Code 1**: Error occurred (file not found, script error, etc.)
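This makes herodo straightforward to drive from shell scripts or CI. A minimal sketch (the script name is a placeholder):
```bash
if herodo deploy.rhai; then
    echo "All scripts executed successfully"
else
    echo "Script execution failed" >&2
    exit 1
fi
```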
## Logging
Enable detailed logging by setting the `RUST_LOG` environment variable:
```bash
RUST_LOG=debug herodo script.rhai
```
## Testing
Run the test suite:
```bash
cd herodo
cargo test
```
The test suite includes:
- Unit tests for core functionality
- Integration tests with real script execution
- Error handling scenarios
- SAL module integration tests
## Dependencies
- **rhai**: Embedded scripting language
- **env_logger**: Logging implementation
- **sal**: System Abstraction Layer library
## License
Apache-2.0

herodo/src/lib.rs Normal file

@@ -0,0 +1,143 @@
//! Herodo - A Rhai script executor for SAL
//!
//! This library loads the Rhai engine, registers all SAL modules,
//! and executes Rhai scripts from a specified directory in sorted order.
use rhai::{Engine, Scope};
use std::error::Error;
use std::fs;
use std::path::{Path, PathBuf};
use std::process;
/// Run the herodo script executor with the given script path
///
/// # Arguments
///
/// * `script_path` - Path to a Rhai script file or directory containing Rhai scripts
///
/// # Returns
///
/// Result indicating success or failure
pub fn run(script_path: &str) -> Result<(), Box<dyn Error>> {
let path = Path::new(script_path);
// Check if the path exists
if !path.exists() {
eprintln!("Error: '{}' does not exist", script_path);
process::exit(1);
}
// Create a new Rhai engine
let mut engine = Engine::new();
// TODO: if we create a scope here we could clean up all the different functions and types registered with the engine.
// We should generalize the way we add things to the scope for each module separately.
let mut scope = Scope::new();
// Conditionally add Hetzner client only when env config is present
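// The required variables are HETZNER_USERNAME and HETZNER_PASSWORD; HETZNER_API_URL
// is optional and defaults to https://robot-ws.your-server.de (see sal::hetzner::config).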
if let Ok(cfg) = sal::hetzner::config::Config::from_env() {
let hetzner_client = sal::hetzner::api::Client::new(cfg);
scope.push("hetzner", hetzner_client);
}
// This makes it easy to call e.g. `hetzner.get_server()` or `mycelium.get_connected_peers()`
// --> without the need to manually create a client for each one first
// --> could be conditionally compiled so that we only include the clients we need (we only push to the scope what the script actually requires)
// Register println function for output
engine.register_fn("println", |s: &str| println!("{}", s));
// Register all SAL modules with the engine
sal::rhai::register(&mut engine)?;
// Collect script files to execute
let script_files: Vec<PathBuf> = if path.is_file() {
// Single file
if let Some(extension) = path.extension() {
if extension != "rhai" {
eprintln!("Warning: '{}' does not have a .rhai extension", script_path);
}
}
vec![path.to_path_buf()]
} else if path.is_dir() {
// Directory - collect all .rhai files recursively and sort them
let mut files = Vec::new();
collect_rhai_files(path, &mut files)?;
if files.is_empty() {
eprintln!("No .rhai files found in directory: {}", script_path);
process::exit(1);
}
// Sort files for consistent execution order
files.sort();
files
} else {
eprintln!("Error: '{}' is neither a file nor a directory", script_path);
process::exit(1);
};
println!(
"Found {} Rhai script{} to execute:",
script_files.len(),
if script_files.len() == 1 { "" } else { "s" }
);
// Execute each script in sorted order
for script_file in script_files {
println!("\nExecuting: {}", script_file.display());
// Read the script content
let script = fs::read_to_string(&script_file)?;
// Execute the script
// match engine.eval::<rhai::Dynamic>(&script) {
// Ok(result) => {
// println!("Script executed successfully");
// if !result.is_unit() {
// println!("Result: {}", result);
// }
// }
// Err(err) => {
// eprintln!("Error executing script: {}", err);
// // Exit with error code when a script fails
// process::exit(1);
// }
// }
engine.run_with_scope(&mut scope, &script)?;
}
println!("\nAll scripts executed successfully!");
Ok(())
}
/// Recursively collect all .rhai files from a directory
///
/// # Arguments
///
/// * `dir` - Directory to search
/// * `files` - Vector to collect files into
///
/// # Returns
///
/// Result indicating success or failure
fn collect_rhai_files(dir: &Path, files: &mut Vec<PathBuf>) -> Result<(), Box<dyn Error>> {
for entry in fs::read_dir(dir)? {
let entry = entry?;
let path = entry.path();
if path.is_dir() {
// Recursively search subdirectories
collect_rhai_files(&path, files)?;
} else if path.is_file() {
// Check if it's a .rhai file
if let Some(extension) = path.extension() {
if extension == "rhai" {
files.push(path);
}
}
}
}
Ok(())
}

herodo/src/main.rs Normal file

@@ -0,0 +1,25 @@
//! Herodo binary entry point
//!
//! This is the main entry point for the herodo binary.
//! It parses command line arguments and executes Rhai scripts using the SAL library.
use env_logger;
use std::env;
use std::process;
fn main() -> Result<(), Box<dyn std::error::Error>> {
// Initialize the logger
env_logger::init();
let args: Vec<String> = env::args().collect();
if args.len() != 2 {
eprintln!("Usage: {} <script_path>", args[0]);
process::exit(1);
}
let script_path = &args[1];
// Call the run function from the herodo library
herodo::run(script_path)
}


@@ -0,0 +1,222 @@
//! Integration tests for herodo script executor
//!
//! These tests verify that herodo can execute Rhai scripts correctly,
//! handle errors appropriately, and integrate with SAL modules.
use std::fs;
use std::path::Path;
use tempfile::TempDir;
/// Test that herodo can execute a simple Rhai script
#[test]
fn test_simple_script_execution() {
let temp_dir = TempDir::new().expect("Failed to create temp directory");
let script_path = temp_dir.path().join("test.rhai");
// Create a simple test script
fs::write(
&script_path,
r#"
println("Hello from herodo test!");
let result = 42;
result
"#,
)
.expect("Failed to write test script");
// Execute the script
let result = herodo::run(script_path.to_str().unwrap());
assert!(result.is_ok(), "Script execution should succeed");
}
/// Test that herodo can execute multiple scripts in a directory
#[test]
fn test_directory_script_execution() {
let temp_dir = TempDir::new().expect("Failed to create temp directory");
// Create multiple test scripts
fs::write(
temp_dir.path().join("01_first.rhai"),
r#"
println("First script executing");
let first = 1;
"#,
)
.expect("Failed to write first script");
fs::write(
temp_dir.path().join("02_second.rhai"),
r#"
println("Second script executing");
let second = 2;
"#,
)
.expect("Failed to write second script");
fs::write(
temp_dir.path().join("03_third.rhai"),
r#"
println("Third script executing");
let third = 3;
"#,
)
.expect("Failed to write third script");
// Execute all scripts in the directory
let result = herodo::run(temp_dir.path().to_str().unwrap());
assert!(result.is_ok(), "Directory script execution should succeed");
}
/// Test that herodo handles non-existent paths correctly
#[test]
fn test_nonexistent_path_handling() {
// This test verifies error handling but herodo::run calls process::exit
// In a real scenario, we would need to refactor herodo to return errors
// instead of calling process::exit for better testability
// For now, we test that the path validation logic works
let nonexistent_path = "/this/path/does/not/exist";
let path = Path::new(nonexistent_path);
assert!(!path.exists(), "Test path should not exist");
}
/// Test that herodo can execute scripts with SAL module functions
#[test]
fn test_sal_module_integration() {
let temp_dir = TempDir::new().expect("Failed to create temp directory");
let script_path = temp_dir.path().join("sal_test.rhai");
// Create a script that uses SAL functions
fs::write(
&script_path,
r#"
println("Testing SAL module integration");
// Test file existence check (should work with temp directory)
let temp_exists = exist(".");
println("Current directory exists: " + temp_exists);
// Test basic text operations
let text = " hello world ";
let trimmed = text.trim();
println("Trimmed text: '" + trimmed + "'");
println("SAL integration test completed");
"#,
)
.expect("Failed to write SAL test script");
// Execute the script
let result = herodo::run(script_path.to_str().unwrap());
assert!(
result.is_ok(),
"SAL integration script should execute successfully"
);
}
/// Test script execution with subdirectories
#[test]
fn test_recursive_directory_execution() {
let temp_dir = TempDir::new().expect("Failed to create temp directory");
// Create subdirectory
let sub_dir = temp_dir.path().join("subdir");
fs::create_dir(&sub_dir).expect("Failed to create subdirectory");
// Create scripts in main directory
fs::write(
temp_dir.path().join("main.rhai"),
r#"
println("Main directory script");
"#,
)
.expect("Failed to write main script");
// Create scripts in subdirectory
fs::write(
sub_dir.join("sub.rhai"),
r#"
println("Subdirectory script");
"#,
)
.expect("Failed to write sub script");
// Execute all scripts recursively
let result = herodo::run(temp_dir.path().to_str().unwrap());
assert!(
result.is_ok(),
"Recursive directory execution should succeed"
);
}
/// Test that herodo handles empty directories gracefully
#[test]
fn test_empty_directory_handling() {
let temp_dir = TempDir::new().expect("Failed to create temp directory");
// Create an empty subdirectory
let empty_dir = temp_dir.path().join("empty");
fs::create_dir(&empty_dir).expect("Failed to create empty directory");
// This should handle the empty directory case
// Note: herodo::run will call process::exit(1) for empty directories
// In a production refactor, this should return an error instead
let path = empty_dir.to_str().unwrap();
let path_obj = Path::new(path);
assert!(
path_obj.is_dir(),
"Empty directory should exist and be a directory"
);
}
/// Test script with syntax errors
#[test]
fn test_syntax_error_handling() {
let temp_dir = TempDir::new().expect("Failed to create temp directory");
let script_path = temp_dir.path().join("syntax_error.rhai");
// Create a script with syntax errors
fs::write(
&script_path,
r#"
println("This script has syntax errors");
let invalid syntax here;
missing_function_call(;
"#,
)
.expect("Failed to write syntax error script");
// Note: herodo::run will call process::exit(1) on script errors
// In a production refactor, this should return an error instead
// For now, we just verify the file exists and can be read
assert!(script_path.exists(), "Syntax error script should exist");
let content = fs::read_to_string(&script_path).expect("Should be able to read script");
assert!(
content.contains("syntax errors"),
"Script should contain expected content"
);
}
/// Test file extension validation
#[test]
fn test_file_extension_validation() {
let temp_dir = TempDir::new().expect("Failed to create temp directory");
// Create files with different extensions
let rhai_file = temp_dir.path().join("valid.rhai");
let txt_file = temp_dir.path().join("invalid.txt");
fs::write(&rhai_file, "println(\"Valid rhai file\");").expect("Failed to write rhai file");
fs::write(&txt_file, "This is not a rhai file").expect("Failed to write txt file");
// Verify file extensions
assert_eq!(rhai_file.extension().unwrap(), "rhai");
assert_eq!(txt_file.extension().unwrap(), "txt");
// herodo should execute .rhai files and warn about non-.rhai files
let result = herodo::run(rhai_file.to_str().unwrap());
assert!(
result.is_ok(),
"Valid .rhai file should execute successfully"
);
}

herodo/tests/unit_tests.rs Normal file

@@ -0,0 +1,268 @@
//! Unit tests for herodo library functions
//!
//! These tests focus on individual functions and components of the herodo library.
use std::fs;
use tempfile::TempDir;
/// Test the collect_rhai_files function indirectly through directory operations
#[test]
fn test_rhai_file_collection_logic() {
let temp_dir = TempDir::new().expect("Failed to create temp directory");
// Create various files
fs::write(temp_dir.path().join("script1.rhai"), "// Script 1")
.expect("Failed to write script1");
fs::write(temp_dir.path().join("script2.rhai"), "// Script 2")
.expect("Failed to write script2");
fs::write(temp_dir.path().join("not_script.txt"), "Not a script")
.expect("Failed to write txt file");
fs::write(temp_dir.path().join("README.md"), "# README").expect("Failed to write README");
// Create subdirectory with more scripts
let sub_dir = temp_dir.path().join("subdir");
fs::create_dir(&sub_dir).expect("Failed to create subdirectory");
fs::write(sub_dir.join("sub_script.rhai"), "// Sub script")
.expect("Failed to write sub script");
// Count .rhai files manually
let mut rhai_count = 0;
for entry in fs::read_dir(temp_dir.path()).expect("Failed to read temp directory") {
let entry = entry.expect("Failed to get directory entry");
let path = entry.path();
if path.is_file() && path.extension().map_or(false, |ext| ext == "rhai") {
rhai_count += 1;
}
}
// Should find 2 .rhai files in the main directory
assert_eq!(
rhai_count, 2,
"Should find exactly 2 .rhai files in main directory"
);
// Verify subdirectory has 1 .rhai file
let mut sub_rhai_count = 0;
for entry in fs::read_dir(&sub_dir).expect("Failed to read subdirectory") {
let entry = entry.expect("Failed to get directory entry");
let path = entry.path();
if path.is_file() && path.extension().map_or(false, |ext| ext == "rhai") {
sub_rhai_count += 1;
}
}
assert_eq!(
sub_rhai_count, 1,
"Should find exactly 1 .rhai file in subdirectory"
);
}
/// Test path validation logic
#[test]
fn test_path_validation() {
let temp_dir = TempDir::new().expect("Failed to create temp directory");
let script_path = temp_dir.path().join("test.rhai");
// Create a test script
fs::write(&script_path, "println(\"test\");").expect("Failed to write test script");
// Test file path validation
assert!(script_path.exists(), "Script file should exist");
assert!(script_path.is_file(), "Script path should be a file");
// Test directory path validation
assert!(temp_dir.path().exists(), "Temp directory should exist");
assert!(temp_dir.path().is_dir(), "Temp path should be a directory");
// Test non-existent path
let nonexistent = temp_dir.path().join("nonexistent.rhai");
assert!(!nonexistent.exists(), "Non-existent path should not exist");
}
/// Test file extension checking
#[test]
fn test_file_extension_checking() {
let temp_dir = TempDir::new().expect("Failed to create temp directory");
// Create files with different extensions
let rhai_file = temp_dir.path().join("script.rhai");
let txt_file = temp_dir.path().join("document.txt");
let no_ext_file = temp_dir.path().join("no_extension");
fs::write(&rhai_file, "// Rhai script").expect("Failed to write rhai file");
fs::write(&txt_file, "Text document").expect("Failed to write txt file");
fs::write(&no_ext_file, "No extension").expect("Failed to write no extension file");
// Test extension detection
assert_eq!(rhai_file.extension().unwrap(), "rhai");
assert_eq!(txt_file.extension().unwrap(), "txt");
assert!(no_ext_file.extension().is_none());
// Test extension comparison
assert!(rhai_file.extension().map_or(false, |ext| ext == "rhai"));
assert!(!txt_file.extension().map_or(false, |ext| ext == "rhai"));
assert!(!no_ext_file.extension().map_or(false, |ext| ext == "rhai"));
}
/// Test script content reading
#[test]
fn test_script_content_reading() {
let temp_dir = TempDir::new().expect("Failed to create temp directory");
let script_path = temp_dir.path().join("content_test.rhai");
let expected_content = r#"
println("Testing content reading");
let value = 42;
value * 2
"#;
fs::write(&script_path, expected_content).expect("Failed to write script content");
// Read the content back
let actual_content = fs::read_to_string(&script_path).expect("Failed to read script content");
assert_eq!(
actual_content, expected_content,
"Script content should match"
);
// Verify content contains expected elements
assert!(
actual_content.contains("println"),
"Content should contain println"
);
assert!(
actual_content.contains("let value = 42"),
"Content should contain variable declaration"
);
assert!(
actual_content.contains("value * 2"),
"Content should contain expression"
);
}
/// Test directory traversal logic
#[test]
fn test_directory_traversal() {
let temp_dir = TempDir::new().expect("Failed to create temp directory");
// Create nested directory structure
let level1 = temp_dir.path().join("level1");
let level2 = level1.join("level2");
let level3 = level2.join("level3");
fs::create_dir_all(&level3).expect("Failed to create nested directories");
// Create scripts at different levels
fs::write(temp_dir.path().join("root.rhai"), "// Root script")
.expect("Failed to write root script");
fs::write(level1.join("level1.rhai"), "// Level 1 script")
.expect("Failed to write level1 script");
fs::write(level2.join("level2.rhai"), "// Level 2 script")
.expect("Failed to write level2 script");
fs::write(level3.join("level3.rhai"), "// Level 3 script")
.expect("Failed to write level3 script");
// Verify directory structure
assert!(temp_dir.path().is_dir(), "Root temp directory should exist");
assert!(level1.is_dir(), "Level 1 directory should exist");
assert!(level2.is_dir(), "Level 2 directory should exist");
assert!(level3.is_dir(), "Level 3 directory should exist");
// Verify scripts exist at each level
assert!(
temp_dir.path().join("root.rhai").exists(),
"Root script should exist"
);
assert!(
level1.join("level1.rhai").exists(),
"Level 1 script should exist"
);
assert!(
level2.join("level2.rhai").exists(),
"Level 2 script should exist"
);
assert!(
level3.join("level3.rhai").exists(),
"Level 3 script should exist"
);
}
/// Test sorting behavior for script execution order
#[test]
fn test_script_sorting_order() {
let temp_dir = TempDir::new().expect("Failed to create temp directory");
// Create scripts with names that should be sorted
let scripts = vec![
"03_third.rhai",
"01_first.rhai",
"02_second.rhai",
"10_tenth.rhai",
"05_fifth.rhai",
];
for script in &scripts {
fs::write(
temp_dir.path().join(script),
format!("// Script: {}", script),
)
.expect("Failed to write script");
}
// Collect and sort the scripts manually to verify sorting logic
let mut found_scripts = Vec::new();
for entry in fs::read_dir(temp_dir.path()).expect("Failed to read directory") {
let entry = entry.expect("Failed to get directory entry");
let path = entry.path();
if path.is_file() && path.extension().map_or(false, |ext| ext == "rhai") {
found_scripts.push(path.file_name().unwrap().to_string_lossy().to_string());
}
}
found_scripts.sort();
// Verify sorting order
let expected_order = vec![
"01_first.rhai",
"02_second.rhai",
"03_third.rhai",
"05_fifth.rhai",
"10_tenth.rhai",
];
assert_eq!(
found_scripts, expected_order,
"Scripts should be sorted in correct order"
);
}
/// Test empty directory handling
#[test]
fn test_empty_directory_detection() {
let temp_dir = TempDir::new().expect("Failed to create temp directory");
let empty_subdir = temp_dir.path().join("empty");
fs::create_dir(&empty_subdir).expect("Failed to create empty subdirectory");
// Verify directory is empty
let entries: Vec<_> = fs::read_dir(&empty_subdir)
.expect("Failed to read empty directory")
.collect();
assert!(entries.is_empty(), "Directory should be empty");
// Count .rhai files in empty directory
let mut rhai_count = 0;
for entry in fs::read_dir(&empty_subdir).expect("Failed to read empty directory") {
let entry = entry.expect("Failed to get directory entry");
let path = entry.path();
if path.is_file() && path.extension().map_or(false, |ext| ext == "rhai") {
rhai_count += 1;
}
}
assert_eq!(
rhai_count, 0,
"Empty directory should contain no .rhai files"
);
}

installers/base.rhai Normal file

@@ -0,0 +1,47 @@
fn mycelium(){
let name="mycelium";
let url="https://github.com/threefoldtech/mycelium/releases/download/v0.6.1/mycelium-x86_64-unknown-linux-musl.tar.gz";
download(url,`/tmp/${name}`,5000);
copy_bin(`/tmp/${name}/*`);
delete(`/tmp/${name}`);
let name="containerd";
}
fn zinit(){
let name="zinit";
let url="https://github.com/threefoldtech/zinit/releases/download/v0.2.25/zinit-linux-x86_64";
download_file(url,`/tmp/${name}`,5000);
screen_kill("zinit");
copy_bin(`/tmp/${name}`);
delete(`/tmp/${name}`);
screen_new("zinit", "zinit init");
sleep(1);
let socket_path = "/tmp/zinit.sock";
// List all services
print("Listing all services:");
let services = zinit_list(socket_path);
if services.is_empty() {
print("No services found.");
} else {
// Iterate over the keys of the map
for name in services.keys() {
let state = services[name];
print(`${name}: ${state}`);
}
}
}
platform_check_linux_x86();
zinit();
// mycelium();
"done"


@@ -0,0 +1,7 @@
platform_check_linux_x86();
exec(`https://git.threefold.info/herocode/sal/raw/branch/main/installers/base.rhai`);
//install all we need for nerdctl
exec(`https://git.threefold.info/herocode/sal/raw/branch/main/installers/nerdctl.rhai`);

installers/nerdctl.rhai Normal file

@@ -0,0 +1,54 @@
fn nerdctl_download(){
let name="nerdctl";
let url="https://github.com/containerd/nerdctl/releases/download/v2.1.2/nerdctl-2.1.2-linux-amd64.tar.gz";
download(url,`/tmp/${name}`,10000);
copy_bin(`/tmp/${name}/*`);
delete(`/tmp/${name}`);
screen_kill("containerd");
let name="containerd";
let url="https://github.com/containerd/containerd/releases/download/v2.1.2/containerd-2.1.2-linux-amd64.tar.gz";
download(url,`/tmp/${name}`,20000);
// copy_bin(`/tmp/${name}/bin/*`);
delete(`/tmp/${name}`);
let cfg = `
[[registry]]
location = "localhost:5000"
insecure = true
`;
file_write("/etc/containers/registries.conf", dedent(cfg));
screen_new("containerd", "containerd");
sleep(1);
nerdctl_remove_all();
run("nerdctl run -d -p 5000:5000 --name registry registry:2").log().execute();
package_install("buildah");
package_install("runc");
// let url="https://github.com/threefoldtech/rfs/releases/download/v2.0.6/rfs";
// download_file(url,`/tmp/rfs`,10000);
// chmod_exec("/tmp/rfs");
// mv(`/tmp/rfs`,"/root/hero/bin/");
}
fn ipfs_download(){
let name="ipfs";
let url="https://github.com/ipfs/kubo/releases/download/v0.34.1/kubo_v0.34.1_linux-amd64.tar.gz";
download(url,`/tmp/${name}`,20);
copy_bin(`/tmp/${name}/kubo/ipfs`);
delete(`/tmp/${name}`);
}
platform_check_linux_x86();
nerdctl_download();
// ipfs_download();
"done"


@@ -0,0 +1,10 @@
[package]
name = "codemonkey"
version = "0.1.0"
edition = "2021"
[dependencies]
tokio = { version = "1", features = ["full"] }
async-trait = "0.1.80"
openrouter-rs = "0.4.5"
serde = { version = "1.0", features = ["derive"] }


@@ -0,0 +1,216 @@
use async_trait::async_trait;
use openrouter_rs::{OpenRouterClient, api::chat::{ChatCompletionRequest, Message}, types::completion::CompletionsResponse};
use std::env;
use std::error::Error;
// Re-export MessageRole for easier use in client code
pub use openrouter_rs::types::Role as MessageRole;
#[async_trait]
pub trait AIProvider {
async fn completion(
&mut self,
request: CompletionRequest,
) -> Result<CompletionsResponse, Box<dyn Error>>;
}
pub struct CompletionRequest {
pub model: String,
pub messages: Vec<Message>,
pub temperature: Option<f64>,
pub max_tokens: Option<i64>,
pub top_p: Option<f64>,
pub stream: Option<bool>,
pub stop: Option<Vec<String>>,
}
pub struct CompletionRequestBuilder<'a> {
provider: &'a mut dyn AIProvider,
model: String,
messages: Vec<Message>,
temperature: Option<f64>,
max_tokens: Option<i64>,
top_p: Option<f64>,
stream: Option<bool>,
stop: Option<Vec<String>>,
provider_type: AIProviderType,
}
impl<'a> CompletionRequestBuilder<'a> {
pub fn new(provider: &'a mut dyn AIProvider, model: String, messages: Vec<Message>, provider_type: AIProviderType) -> Self {
Self {
provider,
model,
messages,
temperature: None,
max_tokens: None,
top_p: None,
stream: None,
stop: None,
provider_type,
}
}
pub fn temperature(mut self, temperature: f64) -> Self {
self.temperature = Some(temperature);
self
}
pub fn max_tokens(mut self, max_tokens: i64) -> Self {
self.max_tokens = Some(max_tokens);
self
}
pub fn top_p(mut self, top_p: f64) -> Self {
self.top_p = Some(top_p);
self
}
pub fn stream(mut self, stream: bool) -> Self {
self.stream = Some(stream);
self
}
pub fn stop(mut self, stop: Vec<String>) -> Self {
self.stop = Some(stop);
self
}
pub async fn completion(self) -> Result<CompletionsResponse, Box<dyn Error>> {
let request = CompletionRequest {
model: self.model,
messages: self.messages,
temperature: self.temperature,
max_tokens: self.max_tokens,
top_p: self.top_p,
stream: self.stream,
stop: self.stop,
};
self.provider.completion(request).await
}
}
pub struct GroqAIProvider {
client: OpenRouterClient,
}
#[async_trait]
impl AIProvider for GroqAIProvider {
async fn completion(
&mut self,
request: CompletionRequest,
) -> Result<CompletionsResponse, Box<dyn Error>> {
let chat_request = ChatCompletionRequest::builder()
.model(request.model)
.messages(request.messages)
.temperature(request.temperature.unwrap_or(1.0))
.max_tokens(request.max_tokens.map(|x| x as u32).unwrap_or(2048))
.top_p(request.top_p.unwrap_or(1.0))
.build()?;
let result = self.client.send_chat_completion(&chat_request).await?;
Ok(result)
}
}
pub struct OpenAIProvider {
client: OpenRouterClient,
}
#[async_trait]
impl AIProvider for OpenAIProvider {
async fn completion(
&mut self,
request: CompletionRequest,
) -> Result<CompletionsResponse, Box<dyn Error>> {
let chat_request = ChatCompletionRequest::builder()
.model(request.model)
.messages(request.messages)
.temperature(request.temperature.unwrap_or(1.0))
.max_tokens(request.max_tokens.map(|x| x as u32).unwrap_or(2048))
.top_p(request.top_p.unwrap_or(1.0))
.build()?;
let result = self.client.send_chat_completion(&chat_request).await?;
Ok(result)
}
}
pub struct OpenRouterAIProvider {
client: OpenRouterClient,
}
#[async_trait]
impl AIProvider for OpenRouterAIProvider {
async fn completion(
&mut self,
request: CompletionRequest,
) -> Result<CompletionsResponse, Box<dyn Error>> {
let chat_request = ChatCompletionRequest::builder()
.model(request.model)
.messages(request.messages)
.temperature(request.temperature.unwrap_or(1.0))
.max_tokens(request.max_tokens.map(|x| x as u32).unwrap_or(2048))
.top_p(request.top_p.unwrap_or(1.0))
.build()?;
let result = self.client.send_chat_completion(&chat_request).await?;
Ok(result)
}
}
pub struct CerebrasAIProvider {
client: OpenRouterClient,
}
#[async_trait]
impl AIProvider for CerebrasAIProvider {
async fn completion(
&mut self,
request: CompletionRequest,
) -> Result<CompletionsResponse, Box<dyn Error>> {
let chat_request = ChatCompletionRequest::builder()
.model(request.model)
.messages(request.messages)
.temperature(request.temperature.unwrap_or(1.0))
.max_tokens(request.max_tokens.map(|x| x as u32).unwrap_or(2048))
.top_p(request.top_p.unwrap_or(1.0))
.build()?;
let result = self.client.send_chat_completion(&chat_request).await?;
Ok(result)
}
}
#[derive(PartialEq)]
pub enum AIProviderType {
Groq,
OpenAI,
OpenRouter,
Cerebras,
}
pub fn create_ai_provider(provider_type: AIProviderType) -> Result<(Box<dyn AIProvider>, AIProviderType), Box<dyn Error>> {
match provider_type {
AIProviderType::Groq => {
let api_key = env::var("GROQ_API_KEY")?;
let client = OpenRouterClient::builder().api_key(api_key).build()?;
Ok((Box::new(GroqAIProvider { client }), AIProviderType::Groq))
}
AIProviderType::OpenAI => {
let api_key = env::var("OPENAI_API_KEY")?;
let client = OpenRouterClient::builder().api_key(api_key).build()?;
Ok((Box::new(OpenAIProvider { client }), AIProviderType::OpenAI))
}
AIProviderType::OpenRouter => {
let api_key = env::var("OPENROUTER_API_KEY")?;
let client = OpenRouterClient::builder().api_key(api_key).build()?;
Ok((Box::new(OpenRouterAIProvider { client }), AIProviderType::OpenRouter))
}
AIProviderType::Cerebras => {
let api_key = env::var("CEREBRAS_API_KEY")?;
let client = OpenRouterClient::builder().api_key(api_key).build()?;
Ok((Box::new(CerebrasAIProvider { client }), AIProviderType::Cerebras))
}
}
}


@@ -0,0 +1,12 @@
[package]
name = "sal-hetzner"
version = "0.1.0"
edition = "2024"
[dependencies]
prettytable = "0.10.0"
reqwest.workspace = true
rhai = { workspace = true, features = ["serde"] }
serde = { workspace = true, features = ["derive"] }
serde_json.workspace = true
thiserror.workspace = true


@@ -0,0 +1,54 @@
use std::fmt;
use serde::Deserialize;
use thiserror::Error;
#[derive(Debug, Error)]
pub enum AppError {
#[error("Request failed: {0}")]
RequestError(#[from] reqwest::Error),
#[error("API error: {0}")]
ApiError(ApiError),
#[error("Deserialization Error: {0:?}")]
SerdeJsonError(#[from] serde_json::Error),
}
#[derive(Debug, Deserialize)]
pub struct ApiError {
pub status: u16,
pub message: String,
}
impl From<reqwest::blocking::Response> for ApiError {
fn from(value: reqwest::blocking::Response) -> Self {
ApiError {
status: value.status().into(),
message: value.text().unwrap_or("The API call returned an error.".to_string()),
}
}
}
impl fmt::Display for ApiError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
#[derive(Deserialize)]
struct HetznerApiError {
code: String,
message: String,
}
#[derive(Deserialize)]
struct HetznerApiErrorWrapper {
error: HetznerApiError,
}
if let Ok(wrapper) = serde_json::from_str::<HetznerApiErrorWrapper>(&self.message) {
write!(
f,
"Status: {}, Code: {}, Message: {}",
self.status, wrapper.error.code, wrapper.error.message
)
} else {
write!(f, "Status: {}: {}", self.status, self.message)
}
}
}


@@ -0,0 +1,513 @@
pub mod error;
pub mod models;
use self::models::{
Boot, Rescue, Server, SshKey, ServerAddonProduct, ServerAddonProductWrapper,
AuctionServerProduct, AuctionServerProductWrapper, AuctionTransaction,
AuctionTransactionWrapper, BootWrapper, Cancellation, CancellationWrapper,
OrderServerBuilder, OrderServerProduct, OrderServerProductWrapper, RescueWrapped,
ServerWrapper, SshKeyWrapper, Transaction, TransactionWrapper,
ServerAddonTransaction, ServerAddonTransactionWrapper,
OrderServerAddonBuilder,
};
use crate::api::error::ApiError;
use crate::config::Config;
use error::AppError;
use reqwest::blocking::Client as HttpClient;
use serde_json::json;
#[derive(Clone)]
pub struct Client {
http_client: HttpClient,
config: Config,
}
impl Client {
pub fn new(config: Config) -> Self {
Self {
http_client: HttpClient::new(),
config,
}
}
fn handle_response<T>(&self, response: reqwest::blocking::Response) -> Result<T, AppError>
where
T: serde::de::DeserializeOwned,
{
let status = response.status();
let body = response.text()?;
if status.is_success() {
serde_json::from_str::<T>(&body).map_err(Into::into)
} else {
Err(AppError::ApiError(ApiError {
status: status.as_u16(),
message: body,
}))
}
}
pub fn get_server(&self, server_number: i32) -> Result<Server, AppError> {
let response = self
.http_client
.get(format!("{}/server/{}", self.config.api_url, server_number))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: ServerWrapper = self.handle_response(response)?;
Ok(wrapped.server)
}
pub fn get_servers(&self) -> Result<Vec<Server>, AppError> {
let response = self
.http_client
.get(format!("{}/server", self.config.api_url))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: Vec<ServerWrapper> = self.handle_response(response)?;
let servers = wrapped.into_iter().map(|sw| sw.server).collect();
Ok(servers)
}
pub fn update_server_name(&self, server_number: i32, name: &str) -> Result<Server, AppError> {
let params = [("server_name", name)];
let response = self
.http_client
.post(format!("{}/server/{}", self.config.api_url, server_number))
.basic_auth(&self.config.username, Some(&self.config.password))
.form(&params)
.send()?;
let wrapped: ServerWrapper = self.handle_response(response)?;
Ok(wrapped.server)
}
pub fn get_cancellation_data(&self, server_number: i32) -> Result<Cancellation, AppError> {
let response = self
.http_client
.get(format!(
"{}/server/{}/cancellation",
self.config.api_url, server_number
))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: CancellationWrapper = self.handle_response(response)?;
Ok(wrapped.cancellation)
}
pub fn cancel_server(
&self,
server_number: i32,
cancellation_date: &str,
) -> Result<Cancellation, AppError> {
let params = [("cancellation_date", cancellation_date)];
let response = self
.http_client
.post(format!(
"{}/server/{}/cancellation",
self.config.api_url, server_number
))
.basic_auth(&self.config.username, Some(&self.config.password))
.form(&params)
.send()?;
let wrapped: CancellationWrapper = self.handle_response(response)?;
Ok(wrapped.cancellation)
}
pub fn withdraw_cancellation(&self, server_number: i32) -> Result<(), AppError> {
self.http_client
.delete(format!(
"{}/server/{}/cancellation",
self.config.api_url, server_number
))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
Ok(())
}
pub fn get_ssh_keys(&self) -> Result<Vec<SshKey>, AppError> {
let response = self
.http_client
.get(format!("{}/key", self.config.api_url))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: Vec<SshKeyWrapper> = self.handle_response(response)?;
let keys = wrapped.into_iter().map(|sk| sk.key).collect();
Ok(keys)
}
pub fn get_ssh_key(&self, fingerprint: &str) -> Result<SshKey, AppError> {
let response = self
.http_client
.get(format!("{}/key/{}", self.config.api_url, fingerprint))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: SshKeyWrapper = self.handle_response(response)?;
Ok(wrapped.key)
}
pub fn add_ssh_key(&self, name: &str, data: &str) -> Result<SshKey, AppError> {
let params = [("name", name), ("data", data)];
let response = self
.http_client
.post(format!("{}/key", self.config.api_url))
.basic_auth(&self.config.username, Some(&self.config.password))
.form(&params)
.send()?;
let wrapped: SshKeyWrapper = self.handle_response(response)?;
Ok(wrapped.key)
}
pub fn update_ssh_key_name(&self, fingerprint: &str, name: &str) -> Result<SshKey, AppError> {
let params = [("name", name)];
let response = self
.http_client
.post(format!("{}/key/{}", self.config.api_url, fingerprint))
.basic_auth(&self.config.username, Some(&self.config.password))
.form(&params)
.send()?;
let wrapped: SshKeyWrapper = self.handle_response(response)?;
Ok(wrapped.key)
}
pub fn delete_ssh_key(&self, fingerprint: &str) -> Result<(), AppError> {
self.http_client
.delete(format!("{}/key/{}", self.config.api_url, fingerprint))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
Ok(())
}
pub fn get_boot_configuration(&self, server_number: i32) -> Result<Boot, AppError> {
let response = self
.http_client
.get(format!("{}/boot/{}", self.config.api_url, server_number))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: BootWrapper = self.handle_response(response)?;
Ok(wrapped.boot)
}
pub fn get_rescue_boot_configuration(&self, server_number: i32) -> Result<Rescue, AppError> {
let response = self
.http_client
.get(format!(
"{}/boot/{}/rescue",
self.config.api_url, server_number
))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: RescueWrapped = self.handle_response(response)?;
Ok(wrapped.rescue)
}
pub fn enable_rescue_mode(
&self,
server_number: i32,
os: &str,
authorized_keys: Option<&[String]>,
) -> Result<Rescue, AppError> {
let mut params = vec![("os", os)];
if let Some(keys) = authorized_keys {
for key in keys {
params.push(("authorized_key[]", key));
}
}
let response = self
.http_client
.post(format!(
"{}/boot/{}/rescue",
self.config.api_url, server_number
))
.basic_auth(&self.config.username, Some(&self.config.password))
.form(&params)
.send()?;
let wrapped: RescueWrapped = self.handle_response(response)?;
Ok(wrapped.rescue)
}
pub fn disable_rescue_mode(&self, server_number: i32) -> Result<Rescue, AppError> {
let response = self
.http_client
.delete(format!(
"{}/boot/{}/rescue",
self.config.api_url, server_number
))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: RescueWrapped = self.handle_response(response)?;
Ok(wrapped.rescue)
}
pub fn get_server_products(
&self,
) -> Result<Vec<OrderServerProduct>, AppError> {
let response = self
.http_client
.get(format!("{}/order/server/product", &self.config.api_url))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: Vec<OrderServerProductWrapper> = self.handle_response(response)?;
let products = wrapped.into_iter().map(|sop| sop.product).collect();
Ok(products)
}
pub fn get_server_product_by_id(
&self,
product_id: &str,
) -> Result<OrderServerProduct, AppError> {
let response = self
.http_client
.get(format!(
"{}/order/server/product/{}",
&self.config.api_url, product_id
))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: OrderServerProductWrapper = self.handle_response(response)?;
Ok(wrapped.product)
}
pub fn order_server(&self, order: OrderServerBuilder) -> Result<Transaction, AppError> {
let mut params = json!({
"product_id": order.product_id,
"dist": order.dist,
"location": order.location,
"authorized_key": order.authorized_keys.unwrap_or_default(),
});
if let Some(addons) = order.addons {
params["addon"] = json!(addons);
}
if let Some(test) = order.test {
if test {
params["test"] = json!(test);
}
}
let response = self
.http_client
.post(format!("{}/order/server/transaction", &self.config.api_url))
.basic_auth(&self.config.username, Some(&self.config.password))
.json(&params)
.send()?;
let wrapped: TransactionWrapper = self.handle_response(response)?;
Ok(wrapped.transaction)
}
pub fn get_transaction_by_id(&self, transaction_id: &str) -> Result<Transaction, AppError> {
let response = self
.http_client
.get(format!(
"{}/order/server/transaction/{}",
&self.config.api_url, transaction_id
))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: TransactionWrapper = self.handle_response(response)?;
Ok(wrapped.transaction)
}
pub fn get_transactions(&self) -> Result<Vec<Transaction>, AppError> {
let response = self
.http_client
.get(format!("{}/order/server/transaction", &self.config.api_url))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: Vec<TransactionWrapper> = self.handle_response(response)?;
let transactions = wrapped.into_iter().map(|t| t.transaction).collect();
Ok(transactions)
}
pub fn get_auction_server_products(&self) -> Result<Vec<AuctionServerProduct>, AppError> {
let response = self
.http_client
.get(format!(
"{}/order/server_market/product",
&self.config.api_url
))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: Vec<AuctionServerProductWrapper> = self.handle_response(response)?;
let products = wrapped.into_iter().map(|asp| asp.product).collect();
Ok(products)
}
pub fn get_auction_server_product_by_id(&self, product_id: &str) -> Result<AuctionServerProduct, AppError> {
let response = self
.http_client
.get(format!("{}/order/server_market/product/{}", &self.config.api_url, product_id))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: AuctionServerProductWrapper = self.handle_response(response)?;
Ok(wrapped.product)
}
pub fn get_auction_transactions(&self) -> Result<Vec<AuctionTransaction>, AppError> {
let response = self
.http_client
.get(format!("{}/order/server_market/transaction", &self.config.api_url))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: Vec<AuctionTransactionWrapper> = self.handle_response(response)?;
let transactions = wrapped.into_iter().map(|t| t.transaction).collect();
Ok(transactions)
}
pub fn get_auction_transaction_by_id(&self, transaction_id: &str) -> Result<AuctionTransaction, AppError> {
let response = self
.http_client
.get(format!("{}/order/server_market/transaction/{}", &self.config.api_url, transaction_id))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: AuctionTransactionWrapper = self.handle_response(response)?;
Ok(wrapped.transaction)
}
pub fn get_server_addon_products(
&self,
server_number: i64,
) -> Result<Vec<ServerAddonProduct>, AppError> {
let response = self
.http_client
.get(format!(
"{}/order/server_addon/{}/product",
&self.config.api_url, server_number
))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: Vec<ServerAddonProductWrapper> = self.handle_response(response)?;
let products = wrapped.into_iter().map(|sap| sap.product).collect();
Ok(products)
}
pub fn order_auction_server(
&self,
product_id: i64,
authorized_keys: Vec<String>,
dist: Option<String>,
arch: Option<String>,
lang: Option<String>,
comment: Option<String>,
addons: Option<Vec<String>>,
test: Option<bool>,
) -> Result<AuctionTransaction, AppError> {
let mut params: Vec<(&str, String)> = Vec::new();
params.push(("product_id", product_id.to_string()));
for key in &authorized_keys {
params.push(("authorized_key[]", key.clone()));
}
if let Some(dist) = dist {
params.push(("dist", dist));
}
if let Some(arch) = arch {
// `arch` is deprecated in the Hetzner Robot API but still accepted
params.push(("arch", arch));
}
if let Some(lang) = lang {
params.push(("lang", lang));
}
if let Some(comment) = comment {
params.push(("comment", comment));
}
if let Some(addons) = addons {
for addon in addons {
params.push(("addon[]", addon));
}
}
if let Some(test) = test {
params.push(("test", test.to_string()));
}
let response = self
.http_client
.post(format!("{}/order/server_market/transaction", &self.config.api_url))
.basic_auth(&self.config.username, Some(&self.config.password))
.form(&params)
.send()?;
let wrapped: AuctionTransactionWrapper = self.handle_response(response)?;
Ok(wrapped.transaction)
}
pub fn get_server_addon_transactions(&self) -> Result<Vec<ServerAddonTransaction>, AppError> {
let response = self
.http_client
.get(format!("{}/order/server_addon/transaction", &self.config.api_url))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: Vec<ServerAddonTransactionWrapper> = self.handle_response(response)?;
let transactions = wrapped.into_iter().map(|satw| satw.transaction).collect();
Ok(transactions)
}
pub fn get_server_addon_transaction_by_id(
&self,
transaction_id: &str,
) -> Result<ServerAddonTransaction, AppError> {
let response = self
.http_client
.get(format!(
"{}/order/server_addon/transaction/{}",
&self.config.api_url, transaction_id
))
.basic_auth(&self.config.username, Some(&self.config.password))
.send()?;
let wrapped: ServerAddonTransactionWrapper = self.handle_response(response)?;
Ok(wrapped.transaction)
}
pub fn order_server_addon(
&self,
order: OrderServerAddonBuilder,
) -> Result<ServerAddonTransaction, AppError> {
let mut params = json!({
"server_number": order.server_number,
"product_id": order.product_id,
});
if let Some(reason) = order.reason {
params["reason"] = json!(reason);
}
if let Some(gateway) = order.gateway {
params["gateway"] = json!(gateway);
}
if let Some(test) = order.test {
if test {
params["test"] = json!(test);
}
}
let response = self
.http_client
.post(format!("{}/order/server_addon/transaction", &self.config.api_url))
.basic_auth(&self.config.username, Some(&self.config.password))
.form(&params)
.send()?;
let wrapped: ServerAddonTransactionWrapper = self.handle_response(response)?;
Ok(wrapped.transaction)
}
}

File diff suppressed because it is too large


@@ -0,0 +1,25 @@
use std::env;
#[derive(Clone)]
pub struct Config {
pub username: String,
pub password: String,
pub api_url: String,
}
impl Config {
pub fn from_env() -> Result<Self, String> {
let username = env::var("HETZNER_USERNAME")
.map_err(|_| "HETZNER_USERNAME environment variable not set".to_string())?;
let password = env::var("HETZNER_PASSWORD")
.map_err(|_| "HETZNER_PASSWORD environment variable not set".to_string())?;
let api_url = env::var("HETZNER_API_URL")
.unwrap_or_else(|_| "https://robot-ws.your-server.de".to_string());
Ok(Config {
username,
password,
api_url,
})
}
}
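To drive the Hetzner client from herodo, this configuration is supplied through the environment; a minimal sketch (credential values and the script name are placeholders):
```bash
export HETZNER_USERNAME="robot-user"
export HETZNER_PASSWORD="robot-password"
# Optional; defaults to https://robot-ws.your-server.de
export HETZNER_API_URL="https://robot-ws.your-server.de"
herodo my_hetzner_script.rhai
```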

Some files were not shown because too many files have changed in this diff