update coordinator and add end-to-end tests
tests/README.md · new file · 170 lines
# End-to-End Integration Tests

This directory contains end-to-end integration tests for the Horus system components. Each test file spawns the actual binary and tests it via its client library.

## Test Files
### `coordinator.rs`

End-to-end tests for the Hero Coordinator service.

**Tests:**

- Actor creation and loading
- Context creation and management
- Runner registration and configuration
- Job creation with dependencies
- Flow creation and DAG generation
- Flow execution (start)

**Prerequisites:**

- Redis server running on `127.0.0.1:6379`
- Ports `9652` (HTTP API) and `9653` (WebSocket API) available

**Run:**

```bash
cargo test --test coordinator -- --test-threads=1
```
### `supervisor.rs`

End-to-end tests for the Hero Supervisor service.

**Tests:**

- OpenRPC discovery (see the sketch below)
- Runner registration and management
- Job creation and execution
- Job status tracking
- API key generation and management
- Authentication verification
- Complete workflow integration
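For orientation, the discovery test boils down to calling the standard OpenRPC `rpc.discover` method on port `3031`. The sketch below is illustrative only: it assumes the JSON-RPC API is served over plain HTTP at the root path and uses `reqwest` (blocking + json features) and `serde_json` rather than the supervisor client library that the real tests go through.

```rust
use serde_json::{json, Value};

// Minimal sketch: query OpenRPC discovery on the supervisor.
// Assumes the JSON-RPC API is reachable at http://127.0.0.1:3031/ (root path).
fn discover_supervisor_api() -> Result<Value, Box<dyn std::error::Error>> {
    let request = json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "rpc.discover",   // standard OpenRPC discovery method
        "params": []
    });

    let response: Value = reqwest::blocking::Client::new()
        .post("http://127.0.0.1:3031/")
        .json(&request)
        .send()?
        .json()?;

    // The "result" field carries the OpenRPC document describing the exposed methods.
    Ok(response["result"].clone())
}
```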
**Prerequisites:**

- Redis server running on `127.0.0.1:6379`
- Port `3031` available

**Run:**

```bash
cargo test --test supervisor -- --test-threads=1
```
### `runner_hero.rs`

End-to-end tests for the Hero (Python) runner.

**Prerequisites:**

- Python 3 installed
- Redis server running

**Run:**

```bash
cargo test --test runner_hero -- --test-threads=1
```

### `runner_osiris.rs`

End-to-end tests for the Osiris (V language) runner.

**Prerequisites:**

- V language compiler installed
- Redis server running

**Run:**

```bash
cargo test --test runner_osiris -- --test-threads=1
```

### `runner_sal.rs`

End-to-end tests for the Sal (Rhai scripting) runner.

**Prerequisites:**

- Redis server running

**Run:**

```bash
cargo test --test runner_sal -- --test-threads=1
```
## Running All Tests

To run all end-to-end tests sequentially:

```bash
cargo test --tests -- --test-threads=1
```
## Important Notes

### Sequential Execution Required

All tests **must** be run with `--test-threads=1` because:

1. Each test spawns a server process that binds to specific ports
2. Tests share Redis databases and may conflict if run in parallel
3. Process cleanup needs to happen sequentially
### Redis Requirement

All tests require a Redis server running on `127.0.0.1:6379`. You can start Redis with:

```bash
redis-server
```

Or using Docker:

```bash
docker run -d -p 6379:6379 redis:latest
```
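If you want tests to fail fast with a clear message when Redis is down, a helper along these lines can be added to the setup code. This is a sketch that assumes the `redis` crate is available as a dev-dependency:

```rust
// Sketch of a fail-fast Redis check (assumes the `redis` crate is a dev-dependency).
fn assert_redis_available() {
    let client = redis::Client::open("redis://127.0.0.1:6379/")
        .expect("invalid Redis URL");
    let mut con = client
        .get_connection()
        .expect("Redis is not reachable on 127.0.0.1:6379 - start it before running the tests");
    let pong: String = redis::cmd("PING").query(&mut con).expect("PING failed");
    assert_eq!(pong, "PONG");
}
```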
### Port Conflicts

If tests fail to start, check that the required ports are not in use:

- **Coordinator**: 9652 (HTTP), 9653 (WebSocket)
- **Supervisor**: 3031
- **Runners**: various ports depending on configuration

You can check port usage with:

```bash
lsof -i :9652
lsof -i :3031
```
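The same kind of check can be done from Rust before a server is spawned, using only the standard library; the port numbers below are the ones listed above:

```rust
use std::net::TcpListener;

// Returns true if we can bind the port, i.e. nothing else is using it.
// Binding and immediately dropping the listener frees the port again.
fn port_is_free(port: u16) -> bool {
    TcpListener::bind(("127.0.0.1", port)).is_ok()
}

fn assert_ports_free() {
    for port in [9652, 9653, 3031] {
        assert!(port_is_free(port), "port {port} is already in use");
    }
}
```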
### Test Isolation

Each test file:

1. Builds the binary using `escargot` (see the sketch after this list)
2. Starts the process with test-specific configuration
3. Runs tests against the running instance
4. Cleans up the process at the end
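A minimal sketch of steps 1 and 2, assuming a binary named `coordinator` and a `REDIS_URL` environment variable; the real binary names, flags, and configuration are defined in each test file:

```rust
use std::process::Child;

// Build the binary under test with escargot, then start it with
// test-specific configuration. The binary name and env var are illustrative.
fn spawn_coordinator() -> Child {
    let built = escargot::CargoBuild::new()
        .bin("coordinator")                          // assumed binary name
        .run()
        .expect("failed to build the coordinator binary");

    built
        .command()
        .env("REDIS_URL", "redis://127.0.0.1:6379")  // assumed configuration
        .spawn()
        .expect("failed to start the coordinator process")
}
```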
Tests within a file may share state through Redis, so they are designed to be idempotent and handle existing data.
### Debugging

To see detailed logs during test execution:

```bash
RUST_LOG=debug cargo test --test coordinator -- --test-threads=1 --nocapture
```

To run a specific test:

```bash
cargo test --test coordinator test_01_actor_create -- --test-threads=1 --nocapture
```
## Test Architecture

Each test file follows this pattern:

1. **Global Process Management**: Uses `lazy_static` and `Once` to ensure the server process starts only once (see the sketch after this list)
2. **Setup Helper**: Common setup code (e.g., `setup_prerequisites()`) to reduce duplication
3. **Sequential Tests**: Tests are numbered (e.g., `test_01_`, `test_02_`) to indicate execution order
4. **Cleanup Test**: A final `test_zz_cleanup()` ensures the process is terminated and ports are freed
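A condensed sketch of this pattern, reusing the hypothetical `spawn_coordinator` helper from the Test Isolation sketch above; names and bodies are illustrative, not copied from the actual test files:

```rust
use std::process::Child;
use std::sync::{Mutex, Once};

use lazy_static::lazy_static;

lazy_static! {
    // Holds the spawned server process so test_zz_cleanup can kill it later.
    static ref SERVER: Mutex<Option<Child>> = Mutex::new(None);
}

static START: Once = Once::new();

// Setup helper shared by all tests: starts the server exactly once.
fn setup_prerequisites() {
    START.call_once(|| {
        let child = spawn_coordinator(); // e.g. the escargot sketch above
        *SERVER.lock().unwrap() = Some(child);
    });
}

#[test]
fn test_01_actor_create() {
    setup_prerequisites();
    // ... exercise the client library against the running instance ...
}

#[test]
fn test_zz_cleanup() {
    // Runs last under --test-threads=1 thanks to the zz prefix.
    if let Some(mut child) = SERVER.lock().unwrap().take() {
        let _ = child.kill();
        let _ = child.wait();
    }
}
```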
## Contributing

When adding new tests:

1. Follow the existing naming convention (`test_NN_description`)
2. Use the setup helpers to avoid duplication
3. Make tests idempotent (handle existing data gracefully; see the sketch below)
4. Add cleanup in the `test_zz_cleanup()` function
5. Update this README with any new prerequisites or test descriptions
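For point 3, handling existing data gracefully usually means treating an "already exists" response as success so reruns don't fail. The sketch below uses placeholder types; the real client library and its error variants will differ:

```rust
// Placeholder types: the real client library defines its own client and error enum.
struct Client;

enum Error {
    AlreadyExists,
    Other(String),
}

impl Client {
    fn create_actor(&self, _name: &str) -> Result<(), Error> {
        // In the real tests this calls the service; here it is a stub.
        Err(Error::AlreadyExists)
    }
}

// Idempotent helper: existing data is treated as success.
fn ensure_actor(client: &Client, name: &str) -> Result<(), Error> {
    match client.create_actor(name) {
        Ok(()) | Err(Error::AlreadyExists) => Ok(()),
        Err(e) => Err(e),
    }
}
```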