# Rhai Worker

The `rhai_worker` crate implements a standalone worker service that listens for Rhai script execution tasks on a Redis queue, executes them, and posts results back to Redis. It is designed to be spawned as a separate OS process by an orchestrator such as the `launcher` crate.

## Features

- **Redis Queue Consumption**: Listens to a specific Redis list (acting as a task queue) for incoming task IDs. The queue is determined by the `--circle-public-key` argument.
- **Rhai Script Execution**: Executes Rhai scripts retrieved from Redis based on task IDs.
- **Task State Management**: Updates task status (`processing`, `completed`, `error`) and stores results in Redis hashes.
- **Script Scope Injection**: Automatically injects two important constants into the Rhai script's scope:
  - `CONTEXT_ID`: The public key of the worker's own circle.
  - `CALLER_ID`: The public key of the entity that requested the script execution.
- **Asynchronous Operations**: Built with `tokio` for non-blocking Redis communication.
- **Graceful Error Handling**: Captures errors during script execution and stores them for the client.

## Core Components

- **`worker_lib` (Library Crate)**:
  - **`Args`**: A struct (using `clap`) for parsing the command-line arguments `--redis-url` and `--circle-public-key`.
  - **`run_worker_loop(engine: Engine, args: Args)`**: The main asynchronous function that:
    - Connects to Redis.
    - Continuously polls the designated Redis queue (`rhai_tasks:<circle_public_key>`) using `BLPOP`.
    - Upon receiving a `task_id`, fetches the task details from a Redis hash.
    - Injects `CALLER_ID` and `CONTEXT_ID` into the script's scope.
    - Executes the script and updates the task status in Redis with the output or error.
- **`worker` (Binary Crate - `cmd/worker.rs`)**:
  - The main executable entry point. It parses command-line arguments, initializes a Rhai engine, and invokes `run_worker_loop`.

## How It Works

1. The worker executable is launched by an external process (e.g., `launcher`), which passes the required command-line arguments.
   ```bash
   # This is typically done programmatically by a parent process.
   /path/to/worker --redis-url redis://127.0.0.1/ --circle-public-key 02...abc
   ```
2. The `run_worker_loop` function connects to Redis and starts listening on its designated task queue (e.g., `rhai_tasks:02...abc`).
3. A `rhai_dispatcher` submits a task by pushing a `task_id` onto this queue and storing the script and other details in a Redis hash.
4. The worker's `BLPOP` command picks up the `task_id`.
5. The worker retrieves the script from the corresponding `rhai_task_details:<task_id>` hash.
6. It updates the task's status to "processing".
7. The Rhai script is executed within a scope that contains both `CONTEXT_ID` and `CALLER_ID`.
8. After execution, the status is updated to "completed" (with output) or "error" (with an error message).
9. The worker then goes back to listening for the next task (a sketch of this loop follows below).
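Below is a minimal, self-contained sketch of such a loop. It follows the behavior described above but is an illustration rather than the crate's actual source: the hash field names (`script`, `callerId`, `status`, `output`, `error`), the exact `Args` layout, and the connection setup are assumptions.

```rust
// Illustrative sketch only: field names and connection details are assumptions,
// not the documented schema of rhai_worker.
use clap::Parser;
use rhai::{Dynamic, Engine, Scope};

#[derive(Parser)]
struct Args {
    #[arg(long)]
    redis_url: String,
    #[arg(long)]
    circle_public_key: String,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let args = Args::parse();
    let engine = Engine::new();

    let client = redis::Client::open(args.redis_url.as_str())?;
    let mut conn = client.get_multiplexed_async_connection().await?;
    let queue_key = format!("rhai_tasks:{}", args.circle_public_key);

    loop {
        // Block until a task_id arrives (or the 5 s timeout elapses).
        let popped: Option<(String, String)> = redis::cmd("BLPOP")
            .arg(&queue_key)
            .arg(5)
            .query_async(&mut conn)
            .await?;
        let Some((_, task_id)) = popped else { continue };

        let details_key = format!("rhai_task_details:{}", task_id);

        // Fetch the script and the caller's public key from the task hash
        // (field names assumed for illustration).
        let script: Option<String> = redis::cmd("HGET")
            .arg(&details_key).arg("script")
            .query_async(&mut conn).await?;
        let caller_id: Option<String> = redis::cmd("HGET")
            .arg(&details_key).arg("callerId")
            .query_async(&mut conn).await?;
        let Some(script) = script else { continue };

        // Mark the task as picked up.
        let _: () = redis::cmd("HSET")
            .arg(&details_key).arg("status").arg("processing")
            .query_async(&mut conn).await?;

        // Inject CONTEXT_ID and CALLER_ID, then run the script.
        let mut scope = Scope::new();
        scope.push_constant("CONTEXT_ID", args.circle_public_key.clone());
        scope.push_constant("CALLER_ID", caller_id.unwrap_or_default());

        let (status, field, value) =
            match engine.eval_with_scope::<Dynamic>(&mut scope, &script) {
                Ok(result) => ("completed", "output", result.to_string()),
                Err(e) => ("error", "error", e.to_string()),
            };

        // Store the outcome for the client to pick up.
        let _: () = redis::cmd("HSET")
            .arg(&details_key)
            .arg("status").arg(status)
            .arg(field).arg(value)
            .query_async(&mut conn).await?;
    }
}
```

Blocking on `BLPOP` with a short timeout avoids busy-polling Redis while still returning control to the loop periodically.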
## Prerequisites

- A running Redis instance accessible by the worker.
- An orchestrator process (like `launcher`) to spawn the worker.
- A `rhai_dispatcher` (or another system) to populate the Redis queues.

## Building and Running

The worker is intended to be built as a dependency and run by another program.

1. **Build the worker:**
   ```bash
   # From the root of the rhailib project
   cargo build --package worker
   ```
   The binary will be located at `target/debug/worker`.

2. **Running the worker:**
   The worker is not typically run manually; the `launcher` crate is responsible for spawning it with the correct arguments (a sketch of programmatic spawning appears at the end of this README). If you need to run it manually for testing, you must provide the required arguments:
   ```bash
   ./target/debug/worker --redis-url redis://127.0.0.1/ --circle-public-key 02...abc
   ```

## Dependencies

Key dependencies include:

- `redis`: For asynchronous Redis communication.
- `rhai`: The Rhai script engine.
- `clap`: For command-line argument parsing.
- `tokio`: For the asynchronous runtime.
- `log`, `env_logger`: For logging.
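As noted under Building and Running, the worker is normally spawned by a parent process such as `launcher`. The sketch below shows one way a parent process could do this with `tokio::process`; the binary path and public key are placeholders, and the real `launcher` implementation may differ.

```rust
// Hedged sketch of spawning the worker from a parent process;
// path and arguments are placeholders, not the launcher's actual code.
use tokio::process::Command;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let mut child = Command::new("./target/debug/worker")
        .arg("--redis-url").arg("redis://127.0.0.1/")
        .arg("--circle-public-key").arg("02...abc")
        .spawn()?;

    // The worker runs until it is stopped; here we simply wait on it.
    let status = child.wait().await?;
    println!("worker exited with {status}");
    Ok(())
}
```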