initial commit
BIN core/worker/.DS_Store (vendored, new binary file)
Binary file not shown.
core/worker/.gitignore (vendored, new file, +2 lines)
@@ -0,0 +1,2 @@
/target
worker_rhai_temp_db
core/worker/Cargo.lock (generated, new file, +1423 lines)
File diff suppressed because it is too large.
core/worker/Cargo.toml (new file, +29 lines)
@@ -0,0 +1,29 @@
[package]
name = "rhailib_worker"
version = "0.1.0"
edition = "2021"

[lib]
name = "rhailib_worker" # Can be different from package name, or same
path = "src/lib.rs"

[[bin]]
name = "worker"
path = "cmd/worker.rs"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
redis = { version = "0.25.0", features = ["tokio-comp"] }
rhai = { version = "1.18.0", default-features = false, features = ["sync", "decimal", "std"] } # Added "decimal" for broader script support
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
tokio = { version = "1", features = ["macros", "rt-multi-thread", "time"] }
log = "0.4"
env_logger = "0.10"
clap = { version = "4.4", features = ["derive"] }
uuid = { version = "1.6", features = ["v4", "serde"] } # Though task_id is a string, uuid might be useful
chrono = { version = "0.4", features = ["serde"] }
rhai_dispatcher = { path = "../../../rhailib/src/dispatcher" }
rhailib_engine = { path = "../engine" }
heromodels = { path = "../../../db/heromodels", features = ["rhai"] }
core/worker/README.md (new file, +75 lines)
@@ -0,0 +1,75 @@
# Rhai Worker

The `rhailib_worker` crate implements a standalone worker service that listens for Rhai script execution tasks from a Redis queue, executes them, and posts results back to Redis. It is designed to be spawned as a separate OS process by an orchestrator such as the `launcher` crate.

## Features

- **Redis Queue Consumption**: Listens to a specific Redis list (acting as a task queue) for incoming task IDs. The queue is determined by the `--circle-public-key` argument.
- **Rhai Script Execution**: Executes Rhai scripts retrieved from Redis based on task IDs.
- **Task State Management**: Updates task status (`processing`, `completed`, `error`) and stores results in Redis hashes.
- **Script Scope Injection**: Automatically injects two important constants into the Rhai script's scope (see the sketch after this list):
  - `CONTEXT_ID`: The public key of the worker's own circle.
  - `CALLER_ID`: The public key of the entity that requested the script execution.
- **Asynchronous Operations**: Built with `tokio` for non-blocking Redis communication.
- **Graceful Error Handling**: Captures errors during script execution and stores them for the client.
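
In this commit the worker attaches these values through the engine's default tag rather than a Rhai `Scope` (see `src/lib.rs`). A minimal sketch of that mechanism, where `configure_engine` and the key strings are illustrative placeholders:

```rust
use rhai::{Dynamic, Engine, Map};

/// Hypothetical helper mirroring the tag setup in src/lib.rs.
/// The "02..." strings are placeholders, not real public keys.
fn configure_engine(engine: &mut Engine) {
    let mut tag = Map::new();
    tag.insert("CALLER_ID".into(), "02...caller".into());
    tag.insert("CONTEXT_ID".into(), "02...context".into());
    tag.insert("DB_PATH".into(), "worker_rhai_temp_db".into());
    // Registered DSL functions can read this tag back via NativeCallContext::tag().
    engine.set_default_tag(Dynamic::from(tag));
}
```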

## Core Components

- **`rhailib_worker` (Library Crate)**:
  - **`Args`**: A struct (using `clap`) for parsing command-line arguments: `--redis-url` and `--circle-public-key`.
  - **`run_worker_loop(engine: Engine, args: Args)`**: The main asynchronous function, which:
    - Connects to Redis.
    - Continuously polls the designated Redis queue (`rhai_tasks:<circle_public_key>`) using `BLPOP`.
    - Upon receiving a `task_id`, fetches the task details from a Redis hash.
    - Injects `CALLER_ID` and `CONTEXT_ID` into the script's scope.
    - Executes the script and updates the task status in Redis with the output or error.
- **`worker` (Binary Crate, `cmd/worker.rs`)**:
  - The main executable entry point. It parses command-line arguments, initializes a Rhai engine, and invokes `run_worker_loop`.

## How It Works

1. The worker executable is launched by an external process (e.g., `launcher`), which passes the required command-line arguments.
   ```bash
   # This is typically done programmatically by a parent process.
   /path/to/worker --redis-url redis://127.0.0.1/ --circle-public-key 02...abc
   ```
2. `run_worker_loop` connects to Redis and starts listening on its designated task queue (e.g., `rhai_tasks:02...abc`).
3. A `rhai_dispatcher` submits a task by pushing a `task_id` onto this queue and storing the script and other details in a Redis hash (see the sketch after this list).
4. The worker's `BLPOP` command picks up the `task_id`.
5. The worker retrieves the script from the corresponding `rhai_task_details:<task_id>` hash.
6. It updates the task's status to "processing".
7. The Rhai script is executed within a scope that contains both `CONTEXT_ID` and `CALLER_ID`.
8. After execution, the status is updated to "completed" (with output) or "error" (with an error message).
9. The worker then goes back to listening for the next task.
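
For steps 3-5, a hedged dispatcher-side sketch using the `redis` crate. The field names match what `src/lib.rs` in this commit reads back from the task hash, while the `rhailib:` key shapes, the worker ID `worker_1`, and all values are illustrative assumptions; `enqueue_task` is a hypothetical helper, not part of the crate:

```rust
use redis::AsyncCommands;

/// Hypothetical helper: enqueue one task the way this worker expects.
/// Key shapes follow src/lib.rs; all values are placeholders.
async fn enqueue_task(
    conn: &mut redis::aio::MultiplexedConnection,
    task_id: &str,
) -> redis::RedisResult<()> {
    // Task details live in a hash at rhailib:<task_id>.
    conn.hset_multiple::<_, _, _, ()>(
        &format!("rhailib:{}", task_id),
        &[
            ("script", "40 + 2"),
            ("callerId", "02...caller"),
            ("contextId", "02...context"),
            ("createdAt", "2024-01-01T00:00:00Z"),
            ("status", "pending"),
        ],
    )
    .await?;
    // Wake the worker BLPOP-ing on its queue (rhailib:<worker_id>).
    conn.lpush::<_, _, ()>("rhailib:worker_1", task_id).await?;
    Ok(())
}
```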

## Prerequisites

- A running Redis instance accessible to the worker.
- An orchestrator process (such as `launcher`) to spawn the worker.
- A `rhai_dispatcher` (or another system) to populate the Redis queues.

## Building and Running

The worker is intended to be built as a dependency and run by another program.

1. **Build the worker:**
   ```bash
   # From the root of the rhailib project
   cargo build --package worker
   ```
   The binary will be located at `target/debug/worker`.

2. **Run the worker:**
   The worker is not typically run manually; the `launcher` crate is responsible for spawning it with the correct arguments. If you need to run it manually for testing, you must provide the required arguments:
   ```bash
   ./target/debug/worker --redis-url redis://127.0.0.1/ --circle-public-key <a_valid_hex_public_key>
   ```

## Dependencies

Key dependencies include:
- `redis`: For asynchronous Redis communication.
- `rhai`: The Rhai script engine.
- `clap`: For command-line argument parsing.
- `tokio`: For the asynchronous runtime.
- `log`, `env_logger`: For logging.
core/worker/cmd/README.md (new file, +113 lines)
@@ -0,0 +1,113 @@
# Rhai Worker Binary

A command-line worker for executing Rhai scripts from Redis task queues.

## Binary: `worker`

### Installation

Build the binary:
```bash
cargo build --bin worker --release
```

### Usage

```bash
# Basic usage - requires a circle public key
worker --circle-public-key <CIRCLE_PUBLIC_KEY>

# Custom Redis URL
worker -c <CIRCLE_PUBLIC_KEY> --redis-url redis://localhost:6379/1

# Custom worker ID and database path
worker -c <CIRCLE_PUBLIC_KEY> --worker-id my_worker --db-path /tmp/worker_db

# Preserve tasks for debugging/benchmarking
worker -c <CIRCLE_PUBLIC_KEY> --preserve-tasks

# Remove timestamps from logs
worker -c <CIRCLE_PUBLIC_KEY> --no-timestamp

# Increase verbosity
worker -c <CIRCLE_PUBLIC_KEY> -v      # Debug logging
worker -c <CIRCLE_PUBLIC_KEY> -vv     # Full debug
worker -c <CIRCLE_PUBLIC_KEY> -vvv    # Trace logging
```

### Command-Line Options

| Option | Short | Default | Description |
|--------|-------|---------|-------------|
| `--circle-public-key` | `-c` | **Required** | Circle public key to listen for tasks |
| `--redis-url` | `-r` | `redis://localhost:6379` | Redis connection URL |
| `--worker-id` | `-w` | `worker_1` | Unique worker identifier |
| `--preserve-tasks` | | `false` | Preserve task details after completion |
| `--db-path` | | `worker_rhai_temp_db` | Database path for the Rhai engine |
| `--no-timestamp` | | `false` | Remove timestamps from log output |
| `--verbose` | `-v` | | Increase verbosity (stackable) |

### Features

- **Task Queue Processing**: Listens to Redis queues for Rhai script execution tasks
- **Performance Optimized**: Configured for maximum Rhai engine performance
- **Graceful Shutdown**: Supports shutdown signals for clean termination
- **Flexible Logging**: Configurable verbosity and timestamp control
- **Database Integration**: Uses heromodels for data persistence
- **Task Cleanup**: Optional task preservation for debugging/benchmarking

### How It Works

1. **Queue Listening**: The worker listens on the Redis queue `rhailib:{circle_public_key}`
2. **Task Processing**: Receives task IDs and fetches task details from Redis
3. **Script Execution**: Executes Rhai scripts with the configured engine
4. **Result Handling**: Updates task status and sends results to reply queues (see the sketch below)
5. **Cleanup**: Optionally cleans up task details after completion
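
To illustrate step 4 from the client side, a hedged sketch that waits for the worker's reply. The key is built exactly as `src/lib.rs` does in this commit (the `rhailib:` prefix plus `:reply:` yields a doubled colon); `await_reply` and the 10-second timeout are illustrative assumptions:

```rust
use redis::AsyncCommands;

/// Hypothetical helper: block until the worker pushes the
/// JSON-serialized RhaiTaskDetails reply, or the timeout elapses.
async fn await_reply(
    conn: &mut redis::aio::MultiplexedConnection,
    task_id: &str,
) -> redis::RedisResult<Option<String>> {
    // src/lib.rs builds this as format!("{}:reply:{}", "rhailib:", task_id).
    let reply_key = format!("rhailib::reply:{}", task_id);
    let popped: Option<(String, String)> = conn.blpop(&reply_key, 10.0).await?;
    Ok(popped.map(|(_key, json)| json))
}
```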

### Configuration Examples

#### Development Worker
```bash
# Simple development worker
worker -c dev_circle_123

# Development with verbose logging (no timestamps)
worker -c dev_circle_123 -v --no-timestamp
```

#### Production Worker
```bash
# Production worker with custom configuration
worker \
  --circle-public-key prod_circle_456 \
  --redis-url redis://redis-server:6379/0 \
  --worker-id prod_worker_1 \
  --db-path /var/lib/worker/db \
  --preserve-tasks
```

#### Benchmarking Worker
```bash
# Worker optimized for benchmarking
worker \
  --circle-public-key bench_circle_789 \
  --preserve-tasks \
  --no-timestamp \
  -vv
```

### Error Handling

The worker provides clear error messages for:
- Missing or invalid circle public keys
- Redis connection failures
- Script execution errors
- Database access issues

### Dependencies

- `rhailib_engine`: Rhai engine with heromodels integration
- `redis`: Redis client for task queue management
- `rhai`: Script execution engine
- `clap`: Command-line argument parsing
- `env_logger`: Logging infrastructure
core/worker/cmd/worker.rs (new file, +95 lines)
@@ -0,0 +1,95 @@
use clap::Parser;
use rhailib_engine::create_heromodels_engine;
use rhailib_worker::spawn_rhai_worker;
use tokio::sync::mpsc;

#[derive(Parser, Debug)]
#[command(author, version, about, long_about = None)]
struct Args {
    /// Worker ID for identification
    #[arg(short, long)]
    worker_id: String,

    /// Redis URL
    #[arg(short, long, default_value = "redis://localhost:6379")]
    redis_url: String,

    /// Preserve task details after completion (for benchmarking)
    #[arg(long, default_value = "false")]
    preserve_tasks: bool,

    /// Root directory for engine database
    #[arg(long, default_value = "worker_rhai_temp_db")]
    db_path: String,

    /// Disable timestamps in log output
    #[arg(long, help = "Remove timestamps from log output")]
    no_timestamp: bool,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let args = Args::parse();

    // Configure env_logger with or without timestamps
    if args.no_timestamp {
        env_logger::Builder::from_default_env()
            .format_timestamp(None)
            .init();
    } else {
        env_logger::init();
    }

    log::info!("Rhai Worker (binary) starting with performance-optimized engine.");
    log::info!(
        "Worker ID: {}, Redis: {}",
        args.worker_id,
        args.redis_url
    );

    let mut engine = create_heromodels_engine();

    // Performance optimizations for benchmarking
    engine.set_max_operations(0); // Unlimited operations for performance testing
    engine.set_max_expr_depths(0, 0); // Unlimited expression depth
    engine.set_max_string_size(0); // Unlimited string size
    engine.set_max_array_size(0); // Unlimited array size
    engine.set_max_map_size(0); // Unlimited map size

    // Enable full optimization for maximum performance
    engine.set_optimization_level(rhai::OptimizationLevel::Full);

    log::info!("Engine configured for maximum performance");

    // Create shutdown channel (for graceful shutdown, though not used in benchmarks)
    let (_shutdown_tx, shutdown_rx) = mpsc::channel::<()>(1);

    // Spawn the worker
    let worker_handle = spawn_rhai_worker(
        args.worker_id,
        args.db_path,
        engine,
        args.redis_url,
        shutdown_rx,
        args.preserve_tasks,
    );

    // Wait for the worker to complete
    match worker_handle.await {
        Ok(result) => match result {
            Ok(_) => {
                log::info!("Worker completed successfully");
                Ok(())
            }
            Err(e) => {
                log::error!("Worker failed: {}", e);
                Err(e)
            }
        },
        Err(e) => {
            log::error!("Worker task panicked: {}", e);
            Err(Box::new(e) as Box<dyn std::error::Error + Send + Sync>)
        }
    }
}
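
Note that `_shutdown_tx` above is created but never used, so the binary currently relies on process termination. A hedged sketch of wiring it to Ctrl-C, assuming tokio's `signal` feature is added to Cargo.toml (it is not enabled in this commit); `shutdown_on_ctrl_c` is a hypothetical helper:

```rust
use tokio::sync::mpsc;

/// Hypothetical helper: a shutdown receiver that fires on Ctrl-C.
/// Must be called from within the Tokio runtime; requires tokio's
/// "signal" feature, which this commit's Cargo.toml does not enable.
fn shutdown_on_ctrl_c() -> mpsc::Receiver<()> {
    let (shutdown_tx, shutdown_rx) = mpsc::channel::<()>(1);
    tokio::spawn(async move {
        // Resolves once when SIGINT / Ctrl-C is received.
        if tokio::signal::ctrl_c().await.is_ok() {
            let _ = shutdown_tx.send(()).await;
        }
    });
    shutdown_rx
}
```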
core/worker/docs/ARCHITECTURE.md (new file, +53 lines)
@@ -0,0 +1,53 @@
# Architecture of the `rhailib_worker` Crate

The `rhailib_worker` crate implements a distributed task execution system for Rhai scripts, providing scalable, reliable script processing through Redis-based task queues. Workers are decoupled from contexts, allowing a single worker to process tasks for multiple contexts (circles).

## Core Architecture

```mermaid
graph TD
    A[Worker Process] --> B[Task Queue Processing]
    A --> C[Script Execution Engine]
    A --> D[Result Management]

    B --> B1[Redis Queue Monitoring]
    B --> B2[Task Deserialization]
    B --> B3[Priority Handling]

    C --> C1[Rhai Engine Integration]
    C --> C2[Context Management]
    C --> C3[Error Handling]

    D --> D1[Result Serialization]
    D --> D2[Reply Queue Management]
    D --> D3[Status Updates]
```

## Key Components

### Task Processing Pipeline
- **Queue Monitoring**: Continuous Redis queue polling for new tasks
- **Task Execution**: Secure Rhai script execution with the proper context
- **Result Handling**: Comprehensive result and error management

### Engine Integration
- **Rhailib Engine**: Full integration with rhailib_engine for DSL access
- **Context Injection**: Proper authentication and database context setup
- **Security**: Isolated execution environment with access controls

### Scalability Features
- **Horizontal Scaling**: Multiple worker instances for load distribution
- **Queue-based Architecture**: Reliable task distribution via Redis
- **Fault Tolerance**: Robust error handling and recovery mechanisms

## Dependencies

- **Redis Integration**: Task queue management and communication
- **Rhai Engine**: Script execution with full DSL capabilities
- **Client Integration**: Shared data structures with rhai_dispatcher
- **Heromodels**: Database and business logic integration
- **Async Runtime**: Tokio for high-performance concurrent processing

## Deployment Patterns

Workers can be deployed as standalone processes, containerized services, or embedded components, providing flexibility for deployment scenarios ranging from development to production.
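
For the embedded case, a hedged sketch of horizontal scaling inside a single process, built on the `spawn_rhai_worker` API from `src/lib.rs`; `spawn_pool`, the worker IDs, DB paths, and the Redis URL are illustrative assumptions:

```rust
use rhailib_engine::create_heromodels_engine;
use rhailib_worker::spawn_rhai_worker;
use tokio::sync::mpsc;

/// Hypothetical helper: run four workers side by side, each with its
/// own queue (rhailib:<worker_id>), engine, and shutdown channel.
async fn spawn_pool() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let mut handles = Vec::new();
    let mut shutdown_senders = Vec::new();
    for i in 0..4 {
        let (tx, rx) = mpsc::channel::<()>(1);
        shutdown_senders.push(tx);
        handles.push(spawn_rhai_worker(
            format!("worker_{}", i),         // worker ID = queue suffix
            format!("/tmp/worker_db_{}", i), // per-worker DB path
            create_heromodels_engine(),
            "redis://127.0.0.1/".to_string(),
            rx,
            false, // do not preserve task hashes
        ));
    }
    // ... later: signal shutdown and wait for every worker to finish.
    for tx in shutdown_senders {
        let _ = tx.send(()).await;
    }
    for handle in handles {
        handle.await??;
    }
    Ok(())
}
```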
core/worker/src/lib.rs (new file, +259 lines)
@@ -0,0 +1,259 @@
use chrono::Utc;
use log::{debug, error, info};
use redis::AsyncCommands;
use rhai::{Dynamic, Engine};
use rhai_dispatcher::RhaiTaskDetails; // Import for constructing the reply message
use serde_json; // For serializing the reply message
use std::collections::HashMap;
use tokio::sync::mpsc; // For shutdown signal
use tokio::task::JoinHandle;

const NAMESPACE_PREFIX: &str = "rhailib:";
const BLPOP_TIMEOUT_SECONDS: usize = 5;

// This function updates specific fields in the Redis hash.
// It doesn't need to know the full RhaiTaskDetails struct, only the field names.
async fn update_task_status_in_redis(
    conn: &mut redis::aio::MultiplexedConnection,
    task_id: &str,
    status: &str,
    output: Option<String>,
    error_msg: Option<String>,
) -> redis::RedisResult<()> {
    let task_key = format!("{}{}", NAMESPACE_PREFIX, task_id);
    let mut updates: Vec<(&str, String)> = vec![
        ("status", status.to_string()),
        ("updatedAt", Utc::now().timestamp().to_string()),
    ];
    if let Some(out) = output {
        updates.push(("output", out));
    }
    if let Some(err) = error_msg {
        updates.push(("error", err));
    }
    debug!(
        "Updating task {} in Redis with status: {}, updates: {:?}",
        task_id, status, updates
    );
    conn.hset_multiple::<_, _, _, ()>(&task_key, &updates)
        .await?;
    Ok(())
}

pub fn spawn_rhai_worker(
    worker_id: String,
    db_path: String,
    mut engine: Engine,
    redis_url: String,
    mut shutdown_rx: mpsc::Receiver<()>, // Shutdown signal receiver
    preserve_tasks: bool,                // Flag to control task cleanup
) -> JoinHandle<Result<(), Box<dyn std::error::Error + Send + Sync>>> {
    tokio::spawn(async move {
        let queue_key = format!("{}{}", NAMESPACE_PREFIX, worker_id);
        info!(
            "Rhai Worker for Worker ID '{}' starting. Connecting to Redis at {}. Listening on queue: {}. Waiting for tasks or shutdown signal.",
            worker_id, redis_url, queue_key
        );

        let redis_client = match redis::Client::open(redis_url.as_str()) {
            Ok(client) => client,
            Err(e) => {
                error!(
                    "Worker for Worker ID '{}': Failed to open Redis client: {}",
                    worker_id, e
                );
                return Err(Box::new(e) as Box<dyn std::error::Error + Send + Sync>);
            }
        };
        let mut redis_conn = match redis_client.get_multiplexed_async_connection().await {
            Ok(conn) => conn,
            Err(e) => {
                error!(
                    "Worker for Worker ID '{}': Failed to get Redis connection: {}",
                    worker_id, e
                );
                return Err(Box::new(e) as Box<dyn std::error::Error + Send + Sync>);
            }
        };
        info!(
            "Worker for Worker ID '{}' successfully connected to Redis.",
            worker_id
        );

        loop {
            let blpop_keys = vec![queue_key.clone()];
            tokio::select! {
                // Listen for shutdown signal
                _ = shutdown_rx.recv() => {
                    info!("Worker for Worker ID '{}': Shutdown signal received. Terminating loop.", worker_id.clone());
                    break;
                }
                // Listen for tasks from Redis
                blpop_result = redis_conn.blpop(&blpop_keys, BLPOP_TIMEOUT_SECONDS as f64) => {
                    debug!("Worker for Worker ID '{}': Attempting BLPOP on queue: {}", worker_id.clone(), queue_key);
                    let response: Option<(String, String)> = match blpop_result {
                        Ok(resp) => resp,
                        Err(e) => {
                            error!("Worker '{}': Redis BLPOP error on queue {}: {}. Worker for this circle might stop.", worker_id, queue_key, e);
                            return Err(Box::new(e) as Box<dyn std::error::Error + Send + Sync>);
                        }
                    };

                    if let Some((_queue_name_recv, task_id)) = response {
                        info!("Worker '{}' received task_id: {} from queue: {}", worker_id, task_id, _queue_name_recv);
                        debug!("Worker '{}', Task {}: Processing started.", worker_id, task_id);

                        let task_details_key = format!("{}{}", NAMESPACE_PREFIX, task_id);
                        debug!("Worker '{}', Task {}: Attempting HGETALL from key: {}", worker_id, task_id, task_details_key);

                        let task_details_map_result: Result<HashMap<String, String>, _> =
                            redis_conn.hgetall(&task_details_key).await;

                        match task_details_map_result {
                            Ok(details_map) => {
                                debug!("Worker '{}', Task {}: HGETALL successful. Details: {:?}", worker_id, task_id, details_map);
                                let script_content_opt = details_map.get("script").cloned();
                                let created_at_str_opt = details_map.get("createdAt").cloned();
                                // Default to an empty string so the checks below handle
                                // missing and empty fields uniformly, without panicking.
                                let caller_id = details_map.get("callerId").cloned().unwrap_or_default();
                                let context_id = details_map.get("contextId").cloned().unwrap_or_default();
                                if context_id.is_empty() {
                                    error!("Worker '{}', Task {}: contextId field missing or empty in Redis hash", worker_id, task_id);
                                    return Err("contextId field missing or empty in Redis hash".into());
                                }
                                if caller_id.is_empty() {
                                    error!("Worker '{}', Task {}: callerId field missing or empty in Redis hash", worker_id, task_id);
                                    return Err("callerId field missing or empty in Redis hash".into());
                                }

                                if let Some(script_content) = script_content_opt {
                                    info!("Worker '{}' processing task_id: {}. Script: {:.50}...", context_id, task_id, script_content);
                                    debug!("Worker for Context ID '{}', Task {}: Attempting to update status to 'processing'.", context_id, task_id);
                                    if let Err(e) = update_task_status_in_redis(&mut redis_conn, &task_id, "processing", None, None).await {
                                        error!("Worker for Context ID '{}', Task {}: Failed to update status to 'processing': {}", context_id, task_id, e);
                                    } else {
                                        debug!("Worker for Context ID '{}', Task {}: Status updated to 'processing'.", context_id, task_id);
                                    }

                                    let mut db_config = rhai::Map::new();
                                    db_config.insert("DB_PATH".into(), db_path.clone().into());
                                    db_config.insert("CALLER_ID".into(), caller_id.clone().into());
                                    db_config.insert("CONTEXT_ID".into(), context_id.clone().into());
                                    engine.set_default_tag(Dynamic::from(db_config)); // Or pass via CallFnOptions

                                    debug!("Worker for Context ID '{}', Task {}: Evaluating script with Rhai engine.", context_id, task_id);

                                    let mut final_status = "error".to_string(); // Default to error
                                    let mut final_output: Option<String> = None;
                                    let mut final_error_msg: Option<String> = None;

                                    match engine.eval::<rhai::Dynamic>(&script_content) {
                                        Ok(result) => {
                                            let output_str = if result.is::<String>() {
                                                // If the result is a string, unwrap it directly.
                                                // This moves `result`, which is fine because this is its last use in this branch.
                                                result.into_string().unwrap()
                                            } else {
                                                result.to_string()
                                            };
                                            info!("Worker for Context ID '{}' task {} completed. Output: {}", context_id, task_id, output_str);
                                            final_status = "completed".to_string();
                                            final_output = Some(output_str);
                                        }
                                        Err(e) => {
                                            let error_str = format!("{:?}", *e);
                                            error!("Worker for Context ID '{}' task {} script evaluation failed. Error: {}", context_id, task_id, error_str);
                                            final_error_msg = Some(error_str);
                                            // final_status remains "error"
                                        }
                                    }

                                    debug!("Worker for Context ID '{}', Task {}: Attempting to update status to '{}'.", context_id, task_id, final_status);
                                    if let Err(e) = update_task_status_in_redis(
                                        &mut redis_conn,
                                        &task_id,
                                        &final_status,
                                        final_output.clone(),    // Clone for task hash update
                                        final_error_msg.clone(), // Clone for task hash update
                                    ).await {
                                        error!("Worker for Context ID '{}', Task {}: Failed to update final status to '{}': {}", context_id, task_id, final_status, e);
                                    } else {
                                        debug!("Worker for Context ID '{}', Task {}: Final status updated to '{}'.", context_id, task_id, final_status);
                                    }

                                    // Send the result to the task's reply queue
                                    let created_at = created_at_str_opt
                                        .and_then(|s| chrono::DateTime::parse_from_rfc3339(&s).ok())
                                        .map(|dt| dt.with_timezone(&Utc))
                                        .unwrap_or_else(Utc::now); // Fallback, though createdAt should exist

                                    let reply_details = RhaiTaskDetails {
                                        task_id: task_id.to_string(),
                                        script: script_content.clone(), // Include script for context in reply
                                        status: final_status,           // The final status
                                        output: final_output,           // The final output
                                        error: final_error_msg,         // The final error
                                        created_at,                     // Original creation time
                                        updated_at: Utc::now(),         // Time of this final update/reply
                                        caller_id: caller_id.clone(),
                                        context_id: context_id.clone(),
                                        worker_id: worker_id.clone(),
                                    };
                                    // Note: NAMESPACE_PREFIX already ends with ':', so this yields "rhailib::reply:<task_id>".
                                    let reply_queue_key = format!("{}:reply:{}", NAMESPACE_PREFIX, task_id);
                                    match serde_json::to_string(&reply_details) {
                                        Ok(reply_json) => {
                                            let lpush_result: redis::RedisResult<i64> = redis_conn.lpush(&reply_queue_key, &reply_json).await;
                                            match lpush_result {
                                                Ok(_) => debug!("Worker for Context ID '{}', Task {}: Successfully sent result to reply queue {}", context_id, task_id, reply_queue_key),
                                                Err(e_lpush) => error!("Worker for Context ID '{}', Task {}: Failed to LPUSH result to reply queue {}: {}", context_id, task_id, reply_queue_key, e_lpush),
                                            }
                                        }
                                        Err(e_json) => {
                                            error!("Worker for Context ID '{}', Task {}: Failed to serialize reply details for queue {}: {}", context_id, task_id, reply_queue_key, e_json);
                                        }
                                    }
                                    // Clean up task details based on the preserve_tasks flag
                                    if !preserve_tasks {
                                        // The worker is responsible for cleaning up the task details hash.
                                        if let Err(e) = redis_conn.del::<_, ()>(&task_details_key).await {
                                            error!("Worker for Context ID '{}', Task {}: Failed to delete task details key '{}': {}", context_id, task_id, task_details_key, e);
                                        } else {
                                            debug!("Worker for Context ID '{}', Task {}: Cleaned up task details key '{}'.", context_id, task_id, task_details_key);
                                        }
                                    } else {
                                        debug!("Worker for Context ID '{}', Task {}: Preserving task details (preserve_tasks=true)", context_id, task_id);
                                    }
                                } else {
                                    // Script content not found in hash
                                    error!(
                                        "Worker for Context ID '{}', Task {}: Script content not found in Redis hash. Details map: {:?}",
                                        context_id, task_id, details_map
                                    );
                                    // Clean up invalid task details based on the preserve_tasks flag
                                    if !preserve_tasks {
                                        // Even if the script is not found, the worker should clean up the invalid task hash.
                                        if let Err(e) = redis_conn.del::<_, ()>(&task_details_key).await {
                                            error!("Worker for Context ID '{}', Task {}: Failed to delete invalid task details key '{}': {}", context_id, task_id, task_details_key, e);
                                        }
                                    } else {
                                        debug!("Worker for Context ID '{}', Task {}: Preserving invalid task details (preserve_tasks=true)", context_id, task_id);
                                    }
                                }
                            }
                            Err(e) => {
                                error!(
                                    "Worker '{}', Task {}: Failed to fetch details (HGETALL) from Redis for key {}. Error: {:?}",
                                    worker_id, task_id, task_details_key, e
                                );
                            }
                        }
                    } else {
                        debug!("Worker '{}': BLPOP timed out on queue {}. No new tasks. Checking for shutdown signal again.", &worker_id, &queue_key);
                    }
                } // End of BLPOP arm
            } // End of tokio::select!
        } // End of loop
        info!("Worker '{}' has shut down.", worker_id);
        Ok(())
    })
}