Compare commits
4 commits (ce1be0369a...vector): 9410176684, ab56fad635, a1127b72da, 3850df89be

Cargo.lock (generated, 4894 lines): diff suppressed because it is too large

Cargo.toml (13 lines changed)
@@ -24,7 +24,18 @@ age = "0.10"
secrecy = "0.8"
ed25519-dalek = "2"
base64 = "0.22"
tantivy = "0.25.0"
# Lance vector database dependencies
lance = "0.33"
lance-index = "0.33"
lance-linalg = "0.33"
# Use Arrow version compatible with Lance 0.33
arrow = "55.2"
arrow-array = "55.2"
arrow-schema = "55.2"
parquet = "55.2"
uuid = { version = "1.10", features = ["v4"] }
reqwest = { version = "0.11", features = ["json"] }
image = "0.25"

[dev-dependencies]
redis = { version = "0.24", features = ["aio", "tokio-comp"] }
docs/lance_vector_db.md (new file, 454 lines)
@@ -0,0 +1,454 @@
# Lance Vector Database Operations

HeroDB includes a powerful vector database integration using Lance, enabling high-performance vector storage, search, and multimodal data management. By default, it uses Ollama for local text embeddings, with support for custom external embedding services.

## Overview

The Lance vector database integration provides:

- **High-performance vector storage** using Lance's columnar format
- **Local Ollama integration** for text embeddings (default, no external dependencies)
- **Custom embedding service support** for advanced use cases
- **Text embedding support** (images via custom services)
- **Vector similarity search** with configurable parameters
- **Scalable indexing** with IVF_PQ (Inverted File with Product Quantization)
- **Redis-compatible command interface**

## Architecture

```
┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│     HeroDB      │    │    External      │    │      Lance      │
│  Redis Server   │◄──►│    Embedding     │    │   Vector Store  │
│                 │    │    Service       │    │                 │
└─────────────────┘    └──────────────────┘    └─────────────────┘
        │                       │                       │
        │                       │                       │
  Redis Protocol            HTTP API              Arrow/Parquet
  Commands                  JSON Requests         Columnar Storage
```

### Key Components

1. **Lance Store**: High-performance columnar vector storage
2. **Ollama Integration**: Local embedding service (default)
3. **Custom Embedding Service**: Optional HTTP API for advanced use cases
4. **Redis Command Interface**: Familiar Redis-style commands
5. **Arrow Schema**: Flexible schema definition for metadata

## Configuration

### Default Setup (Ollama)

HeroDB uses Ollama by default for text embeddings. No configuration is required if Ollama is running locally:

```bash
# Install Ollama (if not already installed)
# Visit: https://ollama.ai

# Pull the embedding model
ollama pull nomic-embed-text

# Ollama automatically runs on localhost:11434
# HeroDB will use this by default
```
**Default Configuration:**
- **URL**: `http://localhost:11434`
- **Model**: `nomic-embed-text`
- **Dimensions**: 768 (for nomic-embed-text)

### Custom Embedding Service (Optional)

To use a custom embedding service instead of Ollama:

```bash
# Set custom embedding service URL
redis-cli HSET config:core:aiembed url "http://your-embedding-service:8080/embed"

# Optional: Set authentication if required
redis-cli HSET config:core:aiembed token "your-api-token"
```

### Embedding Service API Contracts

#### Ollama API (Default)
HeroDB calls Ollama using this format:

```bash
POST http://localhost:11434/api/embeddings
Content-Type: application/json

{
  "model": "nomic-embed-text",
  "prompt": "Your text to embed"
}
```

Response:
```json
{
  "embedding": [0.1, 0.2, 0.3, ...]
}
```
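
If Ollama is running locally, this contract can be exercised directly with `curl` as a sanity check before relying on it from HeroDB (assumes `curl` and `jq` are installed):

```bash
# Sanity-check the local Ollama embedding endpoint
curl -s http://localhost:11434/api/embeddings \
  -H 'Content-Type: application/json' \
  -d '{"model": "nomic-embed-text", "prompt": "Your text to embed"}' \
  | jq '.embedding | length'   # prints 768 for nomic-embed-text
```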
#### Custom Service API
Your custom embedding service should accept POST requests with this JSON format:

```json
{
  "texts": ["text1", "text2"],                  // Optional: array of texts
  "images": ["base64_image1", "base64_image2"], // Optional: base64 encoded images
  "model": "your-model-name"                    // Optional: model specification
}
```

And return responses in this format:

```json
{
  "embeddings": [[0.1, 0.2, ...], [0.3, 0.4, ...]], // Array of embedding vectors
  "model": "model-name",                            // Model used
  "usage": {                                        // Optional usage stats
    "tokens": 100,
    "requests": 2
  }
}
```
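
A service implementing this contract can be smoke-tested the same way. The URL is a placeholder for your own deployment, and whether HeroDB sends the configured token as a `Bearer` header is an assumption here:

```bash
# Smoke-test a custom embedding service (placeholder URL; adjust to your deployment)
curl -s http://your-embedding-service:8080/embed \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer your-api-token' \
  -d '{"texts": ["text1", "text2"], "model": "your-model-name"}' \
  | jq '.embeddings | length'   # should print 2, one vector per input text
```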
## Commands Reference

### Dataset Management

#### LANCE CREATE
Create a new vector dataset with specified dimensions and optional schema.

```bash
LANCE CREATE <dataset> DIM <dimension> [SCHEMA field:type ...]
```

**Parameters:**
- `dataset`: Name of the dataset
- `dimension`: Vector dimension (e.g., 384, 768, 1536)
- `field:type`: Optional metadata fields (string, int, float, bool)

**Examples:**
```bash
# Create a simple dataset for 384-dimensional vectors
LANCE CREATE documents DIM 384

# Create dataset with metadata schema
LANCE CREATE products DIM 768 SCHEMA category:string price:float available:bool
```

#### LANCE LIST
List all available datasets.

```bash
LANCE LIST
```

**Returns:** Array of dataset names

#### LANCE INFO
Get information about a specific dataset.

```bash
LANCE INFO <dataset>
```

**Returns:** Dataset metadata including name, version, row count, and schema

#### LANCE DROP
Delete a dataset and all its data.

```bash
LANCE DROP <dataset>
```
### Data Operations

#### LANCE STORE
Store multimodal data (text/images) with automatic embedding generation.

```bash
LANCE STORE <dataset> [TEXT <text>] [IMAGE <base64>] [key value ...]
```

**Parameters:**
- `dataset`: Target dataset name
- `TEXT`: Text content to embed
- `IMAGE`: Base64-encoded image to embed
- `key value`: Metadata key-value pairs

**Examples:**
```bash
# Store text with metadata
LANCE STORE documents TEXT "Machine learning is transforming industries" category "AI" author "John Doe"

# Store image with metadata
LANCE STORE images IMAGE "iVBORw0KGgoAAAANSUhEUgAA..." category "nature" tags "landscape,mountains"

# Store both text and image
LANCE STORE multimodal TEXT "Beautiful sunset" IMAGE "base64data..." location "California"
```

**Returns:** Unique ID of the stored item
### Search Operations

#### LANCE SEARCH
Search using a raw vector.

```bash
LANCE SEARCH <dataset> VECTOR <vector> K <k> [NPROBES <n>] [REFINE <r>]
```

**Parameters:**
- `dataset`: Dataset to search
- `vector`: Comma-separated vector values (e.g., "0.1,0.2,0.3")
- `k`: Number of results to return
- `NPROBES`: Number of partitions to search (optional)
- `REFINE`: Refine factor for better accuracy (optional)

**Example:**
```bash
LANCE SEARCH documents VECTOR "0.1,0.2,0.3,0.4" K 5 NPROBES 10
```

#### LANCE SEARCH.TEXT
Search using a text query (automatically embedded).

```bash
LANCE SEARCH.TEXT <dataset> <query_text> K <k> [NPROBES <n>] [REFINE <r>]
```

**Parameters:**
- `dataset`: Dataset to search
- `query_text`: Text query to search for
- `k`: Number of results to return
- `NPROBES`: Number of partitions to search (optional)
- `REFINE`: Refine factor for better accuracy (optional)

**Example:**
```bash
LANCE SEARCH.TEXT documents "artificial intelligence applications" K 10 NPROBES 20
```

**Returns:** Array of results with distance scores and metadata
### Embedding Operations

#### LANCE EMBED.TEXT
Generate embeddings for text without storing.

```bash
LANCE EMBED.TEXT <text1> [text2] [text3] ...
```

**Example:**
```bash
LANCE EMBED.TEXT "Hello world" "Machine learning" "Vector database"
```

**Returns:** Array of embedding vectors
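
`EMBED.TEXT` pairs naturally with the raw-vector form of `LANCE SEARCH`: embed a query once, then reuse the vector across searches. A rough sketch; the flattening below assumes the reply can be reduced to the comma-separated form `SEARCH` expects, and the exact RESP shape of the reply may differ:

```bash
# Embed a query, then feed the vector into a raw-vector search.
# Assumes the embedding reply flattens to plain floats, one per line.
vec=$(redis-cli LANCE EMBED.TEXT "neural networks" | paste -sd, -)
redis-cli LANCE SEARCH documents VECTOR "$vec" K 5
```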
### Index Management

#### LANCE CREATE.INDEX
Create a vector index for faster search performance.

```bash
LANCE CREATE.INDEX <dataset> <index_type> [PARTITIONS <n>] [SUBVECTORS <n>]
```

**Parameters:**
- `dataset`: Dataset to index
- `index_type`: Index type (currently supports "IVF_PQ")
- `PARTITIONS`: Number of partitions (default: 256)
- `SUBVECTORS`: Number of sub-vectors for PQ (default: 16)

**Example:**
```bash
LANCE CREATE.INDEX documents IVF_PQ PARTITIONS 512 SUBVECTORS 32
```
## Usage Patterns

### 1. Document Search System

```bash
# Setup
LANCE CREATE documents DIM 384 SCHEMA title:string content:string category:string

# Store documents
LANCE STORE documents TEXT "Introduction to machine learning algorithms" title "ML Basics" category "education"
LANCE STORE documents TEXT "Deep learning neural networks explained" title "Deep Learning" category "education"
LANCE STORE documents TEXT "Building scalable web applications" title "Web Dev" category "programming"

# Create index for better performance
LANCE CREATE.INDEX documents IVF_PQ PARTITIONS 256

# Search
LANCE SEARCH.TEXT documents "neural networks" K 5
```

### 2. Image Similarity Search

```bash
# Setup
LANCE CREATE images DIM 512 SCHEMA filename:string tags:string

# Store images (base64 encoded)
LANCE STORE images IMAGE "iVBORw0KGgoAAAANSUhEUgAA..." filename "sunset.jpg" tags "nature,landscape"
LANCE STORE images IMAGE "iVBORw0KGgoAAAANSUhEUgBB..." filename "city.jpg" tags "urban,architecture"

# Search by image
LANCE STORE temp_search IMAGE "query_image_base64..."
# Then use the returned ID to get the embedding and search
```
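
The base64 payloads above are elided; in practice they come from encoding an image file. A minimal sketch using GNU coreutils (`-w0` disables line wrapping; macOS `base64` behaves slightly differently):

```bash
# Encode an image file and store it in one step
img=$(base64 -w0 sunset.jpg)
redis-cli LANCE STORE images IMAGE "$img" filename "sunset.jpg" tags "nature,landscape"
```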
### 3. Multimodal Content Management

```bash
# Setup
LANCE CREATE content DIM 768 SCHEMA type:string source:string

# Store mixed content
LANCE STORE content TEXT "Product description for smartphone" type "product" source "catalog"
LANCE STORE content IMAGE "product_image_base64..." type "product_image" source "catalog"

# Search across all content types
LANCE SEARCH.TEXT content "smartphone features" K 10
```
## Performance Considerations

### Vector Dimensions
- **384**: Good for general text (e.g., sentence-transformers)
- **768**: Standard for BERT-like models
- **1536**: OpenAI text-embedding-ada-002
- **Higher dimensions**: Better accuracy but slower search

### Index Configuration
- **More partitions**: Better for larger datasets (>100K vectors)
- **More sub-vectors**: Better compression but slower search
- **NPROBES**: Higher values give better accuracy at the cost of slower search
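
The accuracy/latency tradeoff is easy to measure empirically, for example by timing the same query at different `NPROBES` settings:

```bash
# Compare latency at different NPROBES settings (same query, same K)
time redis-cli LANCE SEARCH.TEXT documents "neural networks" K 10 NPROBES 5
time redis-cli LANCE SEARCH.TEXT documents "neural networks" K 10 NPROBES 50
```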
### Best Practices

1. **Create indexes** for datasets with >1000 vectors
2. **Use appropriate dimensions** based on your embedding model
3. **Configure NPROBES** based on accuracy vs. speed requirements
4. **Batch operations** when possible for better performance (see the sketch after this list)
5. **Monitor embedding service** response times and rate limits
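
There is no dedicated bulk-load command, so batching happens client-side; a minimal sketch, assuming one document per line in a local `docs.txt`:

```bash
# Store one document per line from a file (client-side batching sketch)
while IFS= read -r line; do
    redis-cli LANCE STORE documents TEXT "$line" source "docs.txt"
done < docs.txt
```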
## Error Handling

Common error scenarios and solutions:

### Embedding Service Errors
```bash
# Error: Embedding service not configured
ERR Embedding service URL not configured. Set it with: HSET config:core:aiembed url <YOUR_EMBEDDING_SERVICE_URL>

# Error: Service unavailable
ERR Embedding service returned error 404 Not Found
```

**Solution:** Ensure the embedding service is running and the URL is correct.
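
Before debugging the service itself, confirm what HeroDB actually has configured:

```bash
# Inspect the configured embedding endpoint (plain Redis hash reads)
redis-cli HGET config:core:aiembed url
redis-cli HGET config:core:aiembed token
```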
### Dataset Errors
```bash
# Error: Dataset doesn't exist
ERR Dataset 'mydata' does not exist

# Error: Dimension mismatch
ERR Vector dimension mismatch: expected 384, got 768
```

**Solution:** Create the dataset first, or check your vector dimensions.

### Search Errors
```bash
# Error: Invalid vector format
ERR Invalid vector format

# Error: No index available
ERR No index available for fast search
```

**Solution:** Check the vector format, or create an index.
## Integration Examples

### With Python
```python
import redis

r = redis.Redis(host='localhost', port=6379)

# Create dataset
r.execute_command('LANCE', 'CREATE', 'docs', 'DIM', '384')

# Store document
result = r.execute_command('LANCE', 'STORE', 'docs',
                           'TEXT', 'Machine learning tutorial',
                           'category', 'education')
print(f"Stored with ID: {result}")

# Search
results = r.execute_command('LANCE', 'SEARCH.TEXT', 'docs',
                            'machine learning', 'K', '5')
print(f"Search results: {results}")
```
### With Node.js
```javascript
const redis = require('redis');

// redis v4 clients must connect before issuing commands;
// run this inside an async function (or an ESM module with top-level await)
const client = redis.createClient();
await client.connect();

// Create dataset
await client.sendCommand(['LANCE', 'CREATE', 'docs', 'DIM', '384']);

// Store document
const id = await client.sendCommand(['LANCE', 'STORE', 'docs',
  'TEXT', 'Deep learning guide',
  'category', 'AI']);

// Search
const results = await client.sendCommand(['LANCE', 'SEARCH.TEXT', 'docs',
  'deep learning', 'K', '10']);
```
## Monitoring and Maintenance

### Health Checks
```bash
# Check if the Lance store is available
LANCE LIST

# Check dataset health
LANCE INFO mydataset

# Test the embedding service
LANCE EMBED.TEXT "test"
```

### Maintenance Operations
```bash
# Backup: use standard Redis backup procedures;
# the Lance data is stored separately in the data directory

# Cleanup: remove unused datasets
LANCE DROP old_dataset

# Reindex: drop and recreate the dataset, then rebuild its index
LANCE DROP dataset_name
LANCE CREATE dataset_name DIM 384
# Re-import data
LANCE CREATE.INDEX dataset_name IVF_PQ
```
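
Because Lance data lives on disk in the server's data directory, a file-level backup is enough; the path below is an assumption, substitute whatever `--dir` your server was started with:

```bash
# File-level backup of the data directory (./test_data is the --dir from the examples)
tar czf herodb-backup-$(date +%F).tar.gz ./test_data
```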
This integration provides a powerful foundation for building AI-powered applications with vector search capabilities while maintaining the familiar Redis interface.
@@ -1,6 +1,191 @@
-# HeroDB Tantivy Search Examples
+# HeroDB Examples

-This directory contains examples demonstrating HeroDB's full-text search capabilities powered by Tantivy.
+This directory contains examples demonstrating HeroDB's capabilities, including full-text search powered by Tantivy and vector database operations using Lance.

## Available Examples

1. **[Tantivy Search Demo](#tantivy-search-demo-bash-script)** - Full-text search capabilities
2. **[Lance Vector Database Demo](#lance-vector-database-demo-bash-script)** - Vector database and AI operations
3. **[AGE Encryption Demo](age_bash_demo.sh)** - Cryptographic operations
4. **[Simple Demo](simple_demo.sh)** - Basic Redis operations

---
## Lance Vector Database Demo (Bash Script)

### Overview
The `lance_vector_demo.sh` script provides a comprehensive demonstration of HeroDB's vector database capabilities using Lance. It showcases vector storage, similarity search, multimodal data handling, and AI-powered operations with external embedding services.

### Prerequisites
1. **HeroDB Server**: The server must be running (default port 6379)
2. **Redis CLI**: The `redis-cli` tool must be installed and available in your PATH
3. **Embedding Service** (optional): For full functionality, set up an external embedding service

### Running the Demo

#### Step 1: Start HeroDB Server
```bash
# From the project root directory
cargo run -- --dir ./test_data --port 6379
```

#### Step 2: Run the Demo (in a new terminal)
```bash
# From the project root directory
./examples/lance_vector_demo.sh
```

### What the Demo Covers

The script demonstrates comprehensive vector database operations:

1. **Dataset Management**
   - Creating vector datasets with custom dimensions
   - Defining schemas with metadata fields
   - Listing and inspecting datasets
   - Dataset information and statistics

2. **Embedding Operations**
   - Text embedding generation via external services
   - Multimodal embedding support (text + images)
   - Batch embedding operations

3. **Data Storage**
   - Storing text documents with automatic embedding
   - Storing images with metadata
   - Multimodal content storage
   - Rich metadata support

4. **Vector Search**
   - Similarity search with raw vectors
   - Text-based semantic search
   - Configurable search parameters (K, NPROBES, REFINE)
   - Cross-modal search capabilities

5. **Index Management**
   - Creating IVF_PQ indexes for performance
   - Custom index parameters
   - Performance optimization

6. **Advanced Features**
   - Error handling and recovery
   - Performance testing concepts
   - Monitoring and maintenance
   - Cleanup operations
### Key Lance Commands Demonstrated

#### Dataset Management
```bash
# Create vector dataset
LANCE CREATE documents DIM 384

# Create dataset with schema
LANCE CREATE products DIM 768 SCHEMA category:string price:float available:bool

# List datasets
LANCE LIST

# Get dataset information
LANCE INFO documents
```

#### Data Operations
```bash
# Store text with metadata
LANCE STORE documents TEXT "Machine learning tutorial" category "education" author "John Doe"

# Store image with metadata
LANCE STORE images IMAGE "base64_encoded_image..." filename "photo.jpg" tags "nature,landscape"

# Store multimodal content
LANCE STORE content TEXT "Product description" IMAGE "base64_image..." type "product"
```

#### Search Operations
```bash
# Search with raw vector
LANCE SEARCH documents VECTOR "0.1,0.2,0.3,0.4" K 5

# Semantic text search
LANCE SEARCH.TEXT documents "artificial intelligence" K 10 NPROBES 20

# Generate embeddings
LANCE EMBED.TEXT "Hello world" "Machine learning"
```

#### Index Management
```bash
# Create performance index
LANCE CREATE.INDEX documents IVF_PQ PARTITIONS 256 SUBVECTORS 16

# Drop dataset
LANCE DROP old_dataset
```
### Configuration

#### Setting Up Embedding Service
```bash
# Configure embedding service URL
redis-cli HSET config:core:aiembed url "http://your-embedding-service:8080/embed"

# Optional: Set authentication token
redis-cli HSET config:core:aiembed token "your-api-token"
```

#### Embedding Service API
Your embedding service should accept POST requests:
```json
{
  "texts": ["text1", "text2"],
  "images": ["base64_image1", "base64_image2"],
  "model": "your-model-name"
}
```

And return responses:
```json
{
  "embeddings": [[0.1, 0.2, ...], [0.3, 0.4, ...]],
  "model": "model-name",
  "usage": {"tokens": 100, "requests": 2}
}
```
### Interactive Features

The demo script includes:
- **Colored output** for better readability
- **Step-by-step execution** with explanations
- **Error handling** demonstrations
- **Automatic cleanup** options
- **Performance testing** concepts
- **Real-world usage** examples

### Use Cases Demonstrated

1. **Document Search System**
   - Semantic document retrieval
   - Metadata filtering
   - Relevance ranking

2. **Image Similarity Search**
   - Visual content matching
   - Tag-based filtering
   - Multimodal queries

3. **Product Recommendations**
   - Feature-based similarity
   - Category filtering
   - Price range queries

4. **Content Management**
   - Mixed media storage
   - Cross-modal search
   - Rich metadata support

---

## Tantivy Search Demo (Bash Script)
examples/lance_vector_demo.sh (new executable file, 426 lines)
@@ -0,0 +1,426 @@
#!/bin/bash

# Lance Vector Database Demo Script
# This script demonstrates all Lance vector database operations in HeroDB

set -e  # Exit on any error

# Configuration
REDIS_HOST="localhost"
REDIS_PORT="6379"
REDIS_CLI="redis-cli -h $REDIS_HOST -p $REDIS_PORT"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Helper functions
log_info() {
    echo -e "${BLUE}[INFO]${NC} $1"
}

log_success() {
    echo -e "${GREEN}[SUCCESS]${NC} $1"
}

log_warning() {
    echo -e "${YELLOW}[WARNING]${NC} $1"
}

log_error() {
    echo -e "${RED}[ERROR]${NC} $1"
}

execute_command() {
    local cmd="$1"
    local description="$2"

    echo
    log_info "Executing: $description"
    echo "Command: $cmd"

    # eval (rather than a bare $cmd) so quoted arguments inside $cmd survive word splitting
    if result=$(eval "$cmd" 2>&1); then
        log_success "Result: $result"
    else
        log_error "Failed: $result"
        return 1
    fi
}

# Check if HeroDB is running
check_herodb() {
    log_info "Checking if HeroDB is running..."
    if ! $REDIS_CLI ping > /dev/null 2>&1; then
        log_error "HeroDB is not running. Please start it first:"
        echo "  cargo run -- --dir ./test_data --port $REDIS_PORT"
        exit 1
    fi
    log_success "HeroDB is running"
}

# Setup embedding service configuration
setup_embedding_service() {
    log_info "Setting up embedding service configuration..."

    # Note: This is a mock URL for demonstration
    # In production, replace with your actual embedding service
    execute_command \
        "$REDIS_CLI HSET config:core:aiembed url 'http://localhost:8080/embed'" \
        "Configure embedding service URL"

    # Optional: Set authentication token
    # execute_command \
    #     "$REDIS_CLI HSET config:core:aiembed token 'your-api-token'" \
    #     "Configure embedding service token"

    log_warning "Note: Embedding service at http://localhost:8080/embed is not running."
    log_warning "Some operations will fail, but this demonstrates the command structure."
}

# Dataset Management Operations
demo_dataset_management() {
    echo
    echo "=========================================="
    echo "         DATASET MANAGEMENT DEMO"
    echo "=========================================="

    # List datasets (should be empty initially)
    execute_command \
        "$REDIS_CLI LANCE LIST" \
        "List all datasets (initially empty)"

    # Create a simple dataset
    execute_command \
        "$REDIS_CLI LANCE CREATE documents DIM 384" \
        "Create a simple document dataset with 384 dimensions"

    # Create a dataset with schema
    execute_command \
        "$REDIS_CLI LANCE CREATE products DIM 768 SCHEMA category:string price:float available:bool description:string" \
        "Create products dataset with custom schema"

    # Create an image dataset
    execute_command \
        "$REDIS_CLI LANCE CREATE images DIM 512 SCHEMA filename:string tags:string width:int height:int" \
        "Create images dataset for multimodal content"

    # List datasets again
    execute_command \
        "$REDIS_CLI LANCE LIST" \
        "List all datasets (should show 3 datasets)"

    # Get info about datasets
    execute_command \
        "$REDIS_CLI LANCE INFO documents" \
        "Get information about documents dataset"

    execute_command \
        "$REDIS_CLI LANCE INFO products" \
        "Get information about products dataset"
}

# Embedding Operations
demo_embedding_operations() {
    echo
    echo "=========================================="
    echo "         EMBEDDING OPERATIONS DEMO"
    echo "=========================================="

    log_warning "The following operations will fail because no embedding service is running."
    log_warning "This demonstrates the command structure and error handling."

    # Try to embed text (will fail without embedding service)
    execute_command \
        "$REDIS_CLI LANCE EMBED.TEXT 'Hello world'" \
        "Generate embedding for single text" || true

    # Try to embed multiple texts
    execute_command \
        "$REDIS_CLI LANCE EMBED.TEXT 'Machine learning' 'Artificial intelligence' 'Deep learning'" \
        "Generate embeddings for multiple texts" || true
}

# Data Storage Operations
demo_data_storage() {
    echo
    echo "=========================================="
    echo "         DATA STORAGE DEMO"
    echo "=========================================="

    log_warning "Storage operations will fail without embedding service, but show command structure."

    # Store text documents
    execute_command \
        "$REDIS_CLI LANCE STORE documents TEXT 'Introduction to machine learning algorithms and their applications in modern AI systems' category 'education' author 'John Doe' difficulty 'beginner'" \
        "Store a document with text and metadata" || true

    execute_command \
        "$REDIS_CLI LANCE STORE documents TEXT 'Deep learning neural networks for computer vision tasks' category 'research' author 'Jane Smith' difficulty 'advanced'" \
        "Store another document" || true

    # Store product information
    execute_command \
        "$REDIS_CLI LANCE STORE products TEXT 'High-performance laptop with 16GB RAM and SSD storage' category 'electronics' price '1299.99' available 'true'" \
        "Store product with text description" || true

    # Store image with metadata (using placeholder base64)
    execute_command \
        "$REDIS_CLI LANCE STORE images IMAGE 'iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mP8/5+hHgAHggJ/PchI7wAAAABJRU5ErkJggg==' filename 'sample.png' tags 'test,demo' width '1' height '1'" \
        "Store image with metadata (1x1 pixel PNG)" || true

    # Store multimodal content
    execute_command \
        "$REDIS_CLI LANCE STORE images TEXT 'Beautiful sunset over mountains' IMAGE 'iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mP8/5+hHgAHggJ/PchI7wAAAABJRU5ErkJggg==' filename 'sunset.png' tags 'nature,landscape' location 'California'" \
        "Store multimodal content (text + image)" || true
}

# Search Operations
demo_search_operations() {
    echo
    echo "=========================================="
    echo "         SEARCH OPERATIONS DEMO"
    echo "=========================================="

    log_warning "Search operations will fail without data, but show command structure."

    # Search with raw vector
    execute_command \
        "$REDIS_CLI LANCE SEARCH documents VECTOR '0.1,0.2,0.3,0.4,0.5' K 5" \
        "Search with raw vector (5 results)" || true

    # Search with vector and parameters
    execute_command \
        "$REDIS_CLI LANCE SEARCH documents VECTOR '0.1,0.2,0.3,0.4,0.5' K 10 NPROBES 20 REFINE 2" \
        "Search with vector and advanced parameters" || true

    # Text-based search
    execute_command \
        "$REDIS_CLI LANCE SEARCH.TEXT documents 'machine learning algorithms' K 5" \
        "Search using text query" || true

    # Text search with parameters
    execute_command \
        "$REDIS_CLI LANCE SEARCH.TEXT products 'laptop computer' K 3 NPROBES 10" \
        "Search products using text with parameters" || true

    # Search in image dataset
    execute_command \
        "$REDIS_CLI LANCE SEARCH.TEXT images 'sunset landscape' K 5" \
        "Search images using text description" || true
}

# Index Management Operations
demo_index_management() {
    echo
    echo "=========================================="
    echo "         INDEX MANAGEMENT DEMO"
    echo "=========================================="

    # Create indexes for better search performance
    execute_command \
        "$REDIS_CLI LANCE CREATE.INDEX documents IVF_PQ" \
        "Create default IVF_PQ index for documents"

    execute_command \
        "$REDIS_CLI LANCE CREATE.INDEX products IVF_PQ PARTITIONS 512 SUBVECTORS 32" \
        "Create IVF_PQ index with custom parameters for products"

    execute_command \
        "$REDIS_CLI LANCE CREATE.INDEX images IVF_PQ PARTITIONS 256 SUBVECTORS 16" \
        "Create IVF_PQ index for images dataset"

    log_success "Indexes created successfully"
}

# Advanced Usage Examples
demo_advanced_usage() {
    echo
    echo "=========================================="
    echo "         ADVANCED USAGE EXAMPLES"
    echo "=========================================="

    # Create a specialized dataset for semantic search
    execute_command \
        "$REDIS_CLI LANCE CREATE semantic_search DIM 1536 SCHEMA title:string content:string url:string timestamp:string source:string" \
        "Create dataset for semantic search with rich metadata"

    # Demonstrate batch operations concept
    log_info "Batch operations example (would store multiple items):"
    echo "  for doc in documents:"
    echo "    LANCE STORE semantic_search TEXT \"\$doc_content\" title \"\$title\" url \"\$url\""

    # Show monitoring commands
    log_info "Monitoring and maintenance commands:"
    execute_command \
        "$REDIS_CLI LANCE LIST" \
        "List all datasets for monitoring"

    # Show dataset statistics
    for dataset in documents products images semantic_search; do
        execute_command \
            "$REDIS_CLI LANCE INFO $dataset" \
            "Get statistics for $dataset" || true
    done
}

# Cleanup Operations
demo_cleanup() {
    echo
    echo "=========================================="
    echo "         CLEANUP OPERATIONS DEMO"
    echo "=========================================="

    log_info "Demonstrating cleanup operations..."

    # Drop individual datasets
    execute_command \
        "$REDIS_CLI LANCE DROP semantic_search" \
        "Drop semantic_search dataset"

    # List remaining datasets
    execute_command \
        "$REDIS_CLI LANCE LIST" \
        "List remaining datasets"

    # Ask user if they want to clean up all test data
    echo
    read -p "Do you want to clean up all test datasets? (y/N): " -n 1 -r
    echo
    if [[ $REPLY =~ ^[Yy]$ ]]; then
        execute_command \
            "$REDIS_CLI LANCE DROP documents" \
            "Drop documents dataset"

        execute_command \
            "$REDIS_CLI LANCE DROP products" \
            "Drop products dataset"

        execute_command \
            "$REDIS_CLI LANCE DROP images" \
            "Drop images dataset"

        execute_command \
            "$REDIS_CLI LANCE LIST" \
            "Verify all datasets are cleaned up"

        log_success "All test datasets cleaned up"
    else
        log_info "Keeping test datasets for further experimentation"
    fi
}
# Error Handling Demo
demo_error_handling() {
    echo
    echo "=========================================="
    echo "         ERROR HANDLING DEMO"
    echo "=========================================="

    log_info "Demonstrating various error conditions..."

    # Try to access non-existent dataset
    execute_command \
        "$REDIS_CLI LANCE INFO nonexistent_dataset" \
        "Try to get info for non-existent dataset" || true

    # Try to search non-existent dataset
    execute_command \
        "$REDIS_CLI LANCE SEARCH nonexistent_dataset VECTOR '0.1,0.2' K 5" \
        "Try to search non-existent dataset" || true

    # Try to drop non-existent dataset
    execute_command \
        "$REDIS_CLI LANCE DROP nonexistent_dataset" \
        "Try to drop non-existent dataset" || true

    # Try invalid vector format
    execute_command \
        "$REDIS_CLI LANCE SEARCH documents VECTOR 'invalid,vector,format' K 5" \
        "Try search with invalid vector format" || true

    log_info "Error handling demonstration complete"
}

# Performance Testing Demo
demo_performance_testing() {
    echo
    echo "=========================================="
    echo "         PERFORMANCE TESTING DEMO"
    echo "=========================================="

    log_info "Creating performance test dataset..."
    execute_command \
        "$REDIS_CLI LANCE CREATE perf_test DIM 128 SCHEMA batch_id:string item_id:string" \
        "Create performance test dataset"

    log_info "Performance testing would involve:"
    echo "  1. Bulk loading thousands of vectors"
    echo "  2. Creating indexes with different parameters"
    echo "  3. Measuring search latency with various K values"
    echo "  4. Testing different NPROBES settings"
    echo "  5. Monitoring memory usage"

    log_info "Example performance test commands:"
    echo "  # Test search speed with different parameters"
    echo "  time redis-cli LANCE SEARCH.TEXT perf_test 'query' K 10"
    echo "  time redis-cli LANCE SEARCH.TEXT perf_test 'query' K 10 NPROBES 50"
    echo "  time redis-cli LANCE SEARCH.TEXT perf_test 'query' K 100 NPROBES 100"

    # Clean up performance test dataset
    execute_command \
        "$REDIS_CLI LANCE DROP perf_test" \
        "Clean up performance test dataset"
}

# Main execution
main() {
    echo "=========================================="
    echo "    LANCE VECTOR DATABASE DEMO SCRIPT"
    echo "=========================================="
    echo
    echo "This script demonstrates all Lance vector database operations."
    echo "Note: Some operations will fail without a running embedding service."
    echo "This is expected and demonstrates error handling."
    echo

    # Check prerequisites
    check_herodb

    # Setup
    setup_embedding_service

    # Run demos
    demo_dataset_management
    demo_embedding_operations
    demo_data_storage
    demo_search_operations
    demo_index_management
    demo_advanced_usage
    demo_error_handling
    demo_performance_testing

    # Cleanup
    demo_cleanup

    echo
    echo "=========================================="
    echo "              DEMO COMPLETE"
    echo "=========================================="
    echo
    log_success "Lance vector database demo completed successfully!"
    echo
    echo "Next steps:"
    echo "1. Set up a real embedding service (OpenAI, Hugging Face, etc.)"
    echo "2. Update the embedding service URL configuration"
    echo "3. Try storing and searching real data"
    echo "4. Experiment with different vector dimensions and index parameters"
    echo "5. Build your AI-powered application!"
    echo
    echo "For more information, see docs/lance_vector_db.md"
}

# Run the demo
main "$@"
@@ -1,239 +0,0 @@
#!/bin/bash

# HeroDB Tantivy Search Demo
# This script demonstrates full-text search capabilities using Redis commands
# HeroDB server should be running on port 6381

set -e  # Exit on any error

# Configuration
REDIS_HOST="localhost"
REDIS_PORT="6382"
REDIS_CLI="redis-cli -h $REDIS_HOST -p $REDIS_PORT"

# Start the herodb server in the background
echo "Starting herodb server..."
cargo run -p herodb -- --dir /tmp/herodbtest --port ${REDIS_PORT} --debug &
SERVER_PID=$!
echo
sleep 2  # Give the server a moment to start

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Function to print colored output
print_header() {
    echo -e "${BLUE}=== $1 ===${NC}"
}

print_success() {
    echo -e "${GREEN}✓ $1${NC}"
}

print_info() {
    echo -e "${YELLOW}ℹ $1${NC}"
}

print_error() {
    echo -e "${RED}✗ $1${NC}"
}

# Function to check if HeroDB is running
check_herodb() {
    print_info "Checking if HeroDB is running on port $REDIS_PORT..."
    if ! $REDIS_CLI ping > /dev/null 2>&1; then
        print_error "HeroDB is not running on port $REDIS_PORT"
        print_info "Please start HeroDB with: cargo run -- --port $REDIS_PORT"
        exit 1
    fi
    print_success "HeroDB is running and responding"
}

# Function to execute Redis command with error handling
execute_cmd() {
    local cmd="$1"
    local description="$2"

    echo -e "${YELLOW}Command:${NC} $cmd"
    if result=$($REDIS_CLI $cmd 2>&1); then
        echo -e "${GREEN}Result:${NC} $result"
        return 0
    else
        print_error "Failed: $description"
        echo "Error: $result"
        return 1
    fi
}

# Function to pause for readability
pause() {
    echo
    read -p "Press Enter to continue..."
    echo
}

# Main demo function
main() {
    clear
    print_header "HeroDB Tantivy Search Demonstration"
    echo "This demo shows full-text search capabilities using Redis commands"
    echo "HeroDB runs on port $REDIS_PORT (instead of Redis default 6379)"
    echo

    # Check if HeroDB is running
    check_herodb
    echo

    print_header "Step 1: Create Search Index"
    print_info "Creating a product catalog search index with various field types"

    # Create search index with schema
    execute_cmd "FT.CREATE product_catalog SCHEMA title TEXT description TEXT category TAG price NUMERIC rating NUMERIC location GEO" \
        "Creating search index"

    print_success "Search index 'product_catalog' created successfully"
    pause
    print_header "Step 2: Add Sample Products"
    print_info "Adding sample products to demonstrate different search scenarios"

    # Add sample products using FT.ADD
    execute_cmd "FT.ADD product_catalog product:1 1.0 title 'Wireless Bluetooth Headphones' description 'Premium noise-canceling headphones with 30-hour battery life' category 'electronics,audio' price 299.99 rating 4.5 location '-122.4194,37.7749'" "Adding product 1"
    execute_cmd "FT.ADD product_catalog product:2 1.0 title 'Organic Coffee Beans' description 'Single-origin Ethiopian coffee beans, medium roast' category 'food,beverages,organic' price 24.99 rating 4.8 location '-74.0060,40.7128'" "Adding product 2"
    execute_cmd "FT.ADD product_catalog product:3 1.0 title 'Yoga Mat Premium' description 'Eco-friendly yoga mat with superior grip and cushioning' category 'fitness,wellness,eco-friendly' price 89.99 rating 4.3 location '-118.2437,34.0522'" "Adding product 3"
    execute_cmd "FT.ADD product_catalog product:4 1.0 title 'Smart Home Speaker' description 'Voice-controlled smart speaker with AI assistant' category 'electronics,smart-home' price 149.99 rating 4.2 location '-87.6298,41.8781'" "Adding product 4"
    execute_cmd "FT.ADD product_catalog product:5 1.0 title 'Organic Green Tea' description 'Premium organic green tea leaves from Japan' category 'food,beverages,organic,tea' price 18.99 rating 4.7 location '139.6503,35.6762'" "Adding product 5"
    execute_cmd "FT.ADD product_catalog product:6 1.0 title 'Wireless Gaming Mouse' description 'High-precision gaming mouse with RGB lighting' category 'electronics,gaming' price 79.99 rating 4.4 location '-122.3321,47.6062'" "Adding product 6"
    execute_cmd "FT.ADD product_catalog product:7 1.0 title 'Comfortable meditation cushion for mindfulness practice' description 'Meditation cushion with premium materials' category 'wellness,meditation' price 45.99 rating 4.6 location '-122.4194,37.7749'" "Adding product 7"
    execute_cmd "FT.ADD product_catalog product:8 1.0 title 'Bluetooth Earbuds' description 'True wireless earbuds with active noise cancellation' category 'electronics,audio' price 199.99 rating 4.1 location '-74.0060,40.7128'" "Adding product 8"

    print_success "Added 8 products to the index"
    pause

    print_header "Step 3: Basic Text Search"
    print_info "Searching for 'wireless' products"

    execute_cmd "FT.SEARCH product_catalog wireless" "Basic text search"
    pause

    print_header "Step 4: Search with Filters"
    print_info "Searching for 'organic' products"

    execute_cmd "FT.SEARCH product_catalog organic" "Filtered search"
    pause

    print_header "Step 5: Numeric Range Search"
    print_info "Searching for 'premium' products"

    execute_cmd "FT.SEARCH product_catalog premium" "Text search"
    pause

    print_header "Step 6: Sorting Results"
    print_info "Searching for electronics"

    execute_cmd "FT.SEARCH product_catalog electronics" "Category search"
    pause

    print_header "Step 7: Limiting Results"
    print_info "Searching for wireless products with limit"

    execute_cmd "FT.SEARCH product_catalog wireless LIMIT 0 3" "Limited results"
    pause

    print_header "Step 8: Complex Query"
    print_info "Finding audio products with noise cancellation"

    execute_cmd "FT.SEARCH product_catalog 'noise cancellation'" "Complex query"
    pause

    print_header "Step 9: Geographic Search"
    print_info "Searching for meditation products"

    execute_cmd "FT.SEARCH product_catalog meditation" "Text search"
    pause

    print_header "Step 10: Aggregation Example"
    print_info "Getting index information and statistics"

    execute_cmd "FT.INFO product_catalog" "Index information"
    pause

    print_header "Step 11: Search Comparison"
    print_info "Comparing Tantivy search vs simple key matching"

    echo -e "${YELLOW}Tantivy Full-Text Search:${NC}"
    execute_cmd "FT.SEARCH product_catalog 'battery life'" "Full-text search for 'battery life'"

    echo
    echo -e "${YELLOW}Simple Key Pattern Matching:${NC}"
    execute_cmd "KEYS *battery*" "Simple pattern matching for 'battery'"

    print_info "Notice how full-text search finds relevant results even when exact words don't match keys"
    pause

    print_header "Step 12: Fuzzy Search"
    print_info "Searching for headphones"

    execute_cmd "FT.SEARCH product_catalog headphones" "Text search"
    pause

    print_header "Step 13: Phrase Search"
    print_info "Searching for coffee products"

    execute_cmd "FT.SEARCH product_catalog coffee" "Text search"
    pause

    print_header "Step 14: Boolean Queries"
    print_info "Searching for gaming products"

    execute_cmd "FT.SEARCH product_catalog gaming" "Text search"
    echo
    execute_cmd "FT.SEARCH product_catalog tea" "Text search"
    pause

    print_header "Step 15: Cleanup"
    print_info "Removing test data"

    # Delete the search index
    execute_cmd "FT.DROP product_catalog" "Dropping search index"

    # Clean up documents from search index
    for i in {1..8}; do
        execute_cmd "FT.DEL product_catalog product:$i" "Deleting product:$i from index"
    done

    print_success "Cleanup completed"
    echo

    print_header "Demo Summary"
    echo "This demonstration showed:"
    echo "• Creating search indexes with different field types"
    echo "• Adding documents to the search index"
    echo "• Basic and advanced text search queries"
    echo "• Filtering by categories and numeric ranges"
    echo "• Sorting and limiting results"
    echo "• Geographic searches"
    echo "• Fuzzy matching and phrase searches"
    echo "• Boolean query operators"
    echo "• Comparison with simple pattern matching"
    echo

    print_success "HeroDB Tantivy search demo completed successfully!"
    echo
    print_info "Key advantages of Tantivy full-text search:"
    echo "  - Relevance scoring and ranking"
    echo "  - Fuzzy matching and typo tolerance"
    echo "  - Complex boolean queries"
    echo "  - Field-specific searches and filters"
    echo "  - Geographic and numeric range queries"
    echo "  - Much faster than pattern matching on large datasets"
    echo
    print_info "To run HeroDB server: cargo run -- --port 6381"
    print_info "To connect with redis-cli: redis-cli -h localhost -p 6381"
}

# Run the demo
main "$@"
@@ -1,101 +0,0 @@
#!/bin/bash

# Simple Tantivy Search Integration Test for HeroDB
# This script tests the full-text search functionality we just integrated

set -e

echo "🔍 Testing Tantivy Search Integration..."

# Build the project first
echo "📦 Building HeroDB..."
cargo build --release

# Start the server in the background
echo "🚀 Starting HeroDB server on port 6379..."
cargo run --release -- --port 6379 --dir ./test_data &
SERVER_PID=$!

# Wait for server to start
sleep 3

# Function to cleanup on exit
cleanup() {
    echo "🧹 Cleaning up..."
    kill $SERVER_PID 2>/dev/null || true
    rm -rf ./test_data
    exit
}

# Set trap for cleanup
trap cleanup EXIT INT TERM

# Function to execute Redis command
execute_cmd() {
    local cmd="$1"
    local description="$2"

    echo "📝 $description"
    echo "   Command: $cmd"

    if result=$(redis-cli -p 6379 $cmd 2>&1); then
        echo "   ✅ Result: $result"
        echo
        return 0
    else
        echo "   ❌ Failed: $result"
        echo
        return 1
    fi
}

echo "🧪 Running Tantivy Search Tests..."
echo

# Test 1: Create a search index
execute_cmd "ft.create books SCHEMA title TEXT description TEXT author TEXT category TAG price NUMERIC" \
    "Creating search index 'books'"

# Test 2: Add documents to the index
execute_cmd "ft.add books book1 1.0 title \"The Great Gatsby\" description \"A classic American novel about the Jazz Age\" author \"F. Scott Fitzgerald\" category \"fiction,classic\" price \"12.99\"" \
    "Adding first book"

execute_cmd "ft.add books book2 1.0 title \"To Kill a Mockingbird\" description \"A novel about racial injustice in the American South\" author \"Harper Lee\" category \"fiction,classic\" price \"14.99\"" \
    "Adding second book"

execute_cmd "ft.add books book3 1.0 title \"Programming Rust\" description \"A comprehensive guide to Rust programming language\" author \"Jim Blandy\" category \"programming,technical\" price \"49.99\"" \
    "Adding third book"

execute_cmd "ft.add books book4 1.0 title \"The Rust Programming Language\" description \"The official book on Rust programming\" author \"Steve Klabnik\" category \"programming,technical\" price \"39.99\"" \
    "Adding fourth book"

# Test 3: Basic search
execute_cmd "ft.search books Rust" \
    "Searching for 'Rust'"

# Test 4: Search with filters
execute_cmd "ft.search books programming FILTER category programming" \
    "Searching for 'programming' with category filter"

# Test 5: Search with limit
execute_cmd "ft.search books \"*\" LIMIT 0 2" \
    "Getting first 2 documents"

# Test 6: Get index info
execute_cmd "ft.info books" \
    "Getting index information"

# Test 7: Delete a document
execute_cmd "ft.del books book1" \
    "Deleting book1"

# Test 8: Search again to verify deletion
execute_cmd "ft.search books Gatsby" \
    "Searching for deleted book"

# Test 9: Drop the index
execute_cmd "ft.drop books" \
    "Dropping the index"

echo "🎉 All tests completed successfully!"
echo "✅ Tantivy search integration is working correctly"
src/cmd.rs (724 lines changed)
@@ -1,7 +1,8 @@
-use crate::{error::DBError, protocol::Protocol, server::Server, search_cmd};
+use crate::{error::DBError, protocol::Protocol, server::Server};
 use tokio::time::{timeout, Duration};
 use futures::future::select_all;
 use std::collections::HashMap;
 use std::sync::Arc;
+use base64::Engine;

 #[derive(Debug, Clone)]
 pub enum Cmd {
@@ -86,39 +87,47 @@
     AgeVerifyName(String, String, String), // name, message, signature_b64
     AgeList,

-    // Full-text search commands with schema support
-    FtCreate {
-        index_name: String,
-        schema: Vec<(String, String, Vec<String>)>, // (field_name, field_type, options)
-    },
-    FtAdd {
-        index_name: String,
-        doc_id: String,
-        score: f64,
-        fields: std::collections::HashMap<String, String>,
-    },
-    FtSearch {
-        index_name: String,
-        query: String,
-        filters: Vec<(String, String)>, // field, value pairs
-        limit: Option<usize>,
-        offset: Option<usize>,
-        return_fields: Option<Vec<String>>,
-    },
-    FtDel(String, String), // index_name, doc_id
-    FtInfo(String), // index_name
-    FtDrop(String), // index_name
-    FtAlter {
-        index_name: String,
-        field_name: String,
-        field_type: String,
-        options: Vec<String>,
-    },
-    FtAggregate {
-        index_name: String,
-        query: String,
-        group_by: Vec<String>,
-        reducers: Vec<String>,
-    },
+    // Lance vector database commands
+    LanceCreate {
+        dataset: String,
+        dim: usize,
+        schema: Vec<(String, String)>, // field_name, field_type pairs
+    },
+    LanceStore {
+        dataset: String,
+        text: Option<String>,
+        image_base64: Option<String>,
+        metadata: std::collections::HashMap<String, String>,
+    },
+    LanceSearch {
+        dataset: String,
+        vector: Vec<f32>,
+        k: usize,
+        nprobes: Option<usize>,
+        refine_factor: Option<usize>,
+    },
+    LanceSearchText {
+        dataset: String,
+        query_text: String,
+        k: usize,
+        nprobes: Option<usize>,
+        refine_factor: Option<usize>,
+    },
+    LanceEmbedText {
+        texts: Vec<String>,
+    },
+    LanceCreateIndex {
+        dataset: String,
+        index_type: String,
+        num_partitions: Option<usize>,
+        num_sub_vectors: Option<usize>,
+    },
+    LanceList,
+    LanceDrop {
+        dataset: String,
+    },
+    LanceInfo {
+        dataset: String,
+    },
 }
@@ -652,148 +661,237 @@ impl Cmd {
|
||||
_ => return Err(DBError(format!("unsupported AGE subcommand {:?}", cmd))),
|
||||
}
|
||||
}
|
||||
"ft.create" => {
|
||||
if cmd.len() < 4 || cmd[2].to_uppercase() != "SCHEMA" {
|
||||
return Err(DBError("ERR FT.CREATE requires: indexname SCHEMA field1 type1 [options] ...".to_string()));
|
||||
}
|
||||
|
||||
let index_name = cmd[1].clone();
|
||||
let mut schema = Vec::new();
|
||||
let mut i = 3;
|
||||
|
||||
while i < cmd.len() {
|
||||
if i + 1 >= cmd.len() {
|
||||
return Err(DBError("ERR incomplete field definition".to_string()));
|
||||
}
|
||||
|
||||
let field_name = cmd[i].clone();
|
||||
let field_type = cmd[i + 1].to_uppercase();
|
||||
let mut options = Vec::new();
|
||||
i += 2;
|
||||
|
||||
// Parse field options until we hit another field name or end
|
||||
while i < cmd.len() && !["TEXT", "NUMERIC", "TAG", "GEO"].contains(&cmd[i].to_uppercase().as_str()) {
|
||||
options.push(cmd[i].to_uppercase());
|
||||
i += 1;
|
||||
|
||||
// If this option takes a value, consume it too
|
||||
if i > 0 && ["SEPARATOR", "WEIGHT"].contains(&cmd[i-1].to_uppercase().as_str()) && i < cmd.len() {
|
||||
options.push(cmd[i].clone());
|
||||
i += 1;
|
||||
}
|
||||
}
|
||||
|
||||
schema.push((field_name, field_type, options));
|
||||
}
|
||||
|
||||
Cmd::FtCreate {
|
||||
index_name,
|
||||
schema,
|
||||
}
|
||||
}
|
||||
"ft.add" => {
|
||||
if cmd.len() < 5 {
|
||||
return Err(DBError("ERR FT.ADD requires: index_name doc_id score field value ...".to_string()));
|
||||
}
|
||||
|
||||
let index_name = cmd[1].clone();
|
||||
let doc_id = cmd[2].clone();
|
||||
let score = cmd[3].parse::<f64>()
|
||||
.map_err(|_| DBError("ERR score must be a number".to_string()))?;
|
||||
|
||||
let mut fields = HashMap::new();
|
||||
let mut i = 4;
|
||||
|
||||
while i + 1 < cmd.len() {
|
||||
fields.insert(cmd[i].clone(), cmd[i + 1].clone());
|
||||
i += 2;
|
||||
}
|
||||
|
||||
Cmd::FtAdd {
|
||||
index_name,
|
||||
doc_id,
|
||||
score,
|
||||
fields,
|
||||
}
|
||||
}
            "ft.search" => {
                if cmd.len() < 3 {
                    return Err(DBError("ERR FT.SEARCH requires: index_name query [options]".to_string()));
                }

                let index_name = cmd[1].clone();
                let query = cmd[2].clone();

                let mut filters = Vec::new();
                let mut limit = None;
                let mut offset = None;
                let mut return_fields = None;

                let mut i = 3;
                while i < cmd.len() {
                    match cmd[i].to_uppercase().as_str() {
                        "FILTER" => {
                            // FILTER consumes a field and a value (positions i+1 and i+2)
                            if i + 2 >= cmd.len() {
                                return Err(DBError("ERR FILTER requires field and value".to_string()));
                            }
                            filters.push((cmd[i + 1].clone(), cmd[i + 2].clone()));
                            i += 3;
                        }
                        "LIMIT" => {
                            if i + 2 >= cmd.len() {
                                return Err(DBError("ERR LIMIT requires offset and num".to_string()));
                            }
                            offset = Some(cmd[i + 1].parse().unwrap_or(0));
                            limit = Some(cmd[i + 2].parse().unwrap_or(10));
                            i += 3;
                        }
                        "RETURN" => {
                            if i + 1 >= cmd.len() {
                                return Err(DBError("ERR RETURN requires field count".to_string()));
                            }
                            let count: usize = cmd[i + 1].parse().unwrap_or(0);
                            i += 2;

                            let mut fields = Vec::new();
                            for _ in 0..count {
                                if i < cmd.len() {
                                    fields.push(cmd[i].clone());
                                    i += 1;
                                }
                            }
                            return_fields = Some(fields);
                        }
                        _ => i += 1,
                    }
                }

                Cmd::FtSearch {
                    index_name,
                    query,
                    filters,
                    limit,
                    offset,
                    return_fields,
                }
            }
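            // Hedged usage sketch (names illustrative):
            //   FT.SEARCH products "mouse" FILTER category electronics LIMIT 0 10 RETURN 2 title price
            // parses to filters = [("category", "electronics")], offset = 0, limit = 10,
            // and return_fields = Some(["title", "price"]).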
            "ft.del" => {
                if cmd.len() != 3 {
                    return Err(DBError("ERR FT.DEL requires: index_name doc_id".to_string()));
                }
                Cmd::FtDel(cmd[1].clone(), cmd[2].clone())
            }
            "ft.info" => {
                if cmd.len() != 2 {
                    return Err(DBError("ERR FT.INFO requires: index_name".to_string()));
                }
                Cmd::FtInfo(cmd[1].clone())
            }
            "ft.drop" => {
                if cmd.len() != 2 {
                    return Err(DBError("ERR FT.DROP requires: index_name".to_string()));
                }
                Cmd::FtDrop(cmd[1].clone())
            }
"lance" => {
|
||||
if cmd.len() < 2 {
|
||||
return Err(DBError("wrong number of arguments for LANCE".to_string()));
|
||||
}
|
||||
match cmd[1].to_lowercase().as_str() {
|
||||
"create" => {
|
||||
if cmd.len() < 4 {
|
||||
return Err(DBError("LANCE CREATE <dataset> DIM <dimension> [SCHEMA field:type ...]".to_string()));
|
||||
}
|
||||
let dataset = cmd[2].clone();
|
||||
|
||||
// Parse DIM parameter
|
||||
if cmd[3].to_lowercase() != "dim" {
|
||||
return Err(DBError("Expected DIM after dataset name".to_string()));
|
||||
}
|
||||
if cmd.len() < 5 {
|
||||
return Err(DBError("Missing dimension value".to_string()));
|
||||
}
|
||||
let dim = cmd[4].parse::<usize>().map_err(|_| DBError("Invalid dimension value".to_string()))?;
|
||||
|
||||
// Parse optional SCHEMA
|
||||
let mut schema = Vec::new();
|
||||
let mut i = 5;
|
||||
if i < cmd.len() && cmd[i].to_lowercase() == "schema" {
|
||||
i += 1;
|
||||
while i < cmd.len() {
|
||||
let field_spec = &cmd[i];
|
||||
let parts: Vec<&str> = field_spec.split(':').collect();
|
||||
if parts.len() != 2 {
|
||||
return Err(DBError("Schema fields must be in format field:type".to_string()));
|
||||
}
|
||||
schema.push((parts[0].to_string(), parts[1].to_string()));
|
||||
i += 1;
|
||||
}
|
||||
}
|
||||
|
||||
Cmd::LanceCreate { dataset, dim, schema }
|
||||
}
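                    // Hedged usage sketch (dataset and field names illustrative):
                    //   LANCE CREATE documents DIM 768 SCHEMA title:string year:int
                    // parses to dim = 768 and schema = [("title", "string"), ("year", "int")].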
                    "store" => {
                        if cmd.len() < 3 {
                            return Err(DBError("LANCE STORE <dataset> [TEXT <text>] [IMAGE <base64>] [metadata...]".to_string()));
                        }
                        let dataset = cmd[2].clone();
                        let mut text = None;
                        let mut image_base64 = None;
                        let mut metadata = std::collections::HashMap::new();

                        let mut i = 3;
                        while i < cmd.len() {
                            match cmd[i].to_lowercase().as_str() {
                                "text" => {
                                    if i + 1 >= cmd.len() {
                                        return Err(DBError("TEXT requires a value".to_string()));
                                    }
                                    text = Some(cmd[i + 1].clone());
                                    i += 2;
                                }
                                "image" => {
                                    if i + 1 >= cmd.len() {
                                        return Err(DBError("IMAGE requires a base64 value".to_string()));
                                    }
                                    image_base64 = Some(cmd[i + 1].clone());
                                    i += 2;
                                }
                                _ => {
                                    // Parse as a metadata key/value pair
                                    if i + 1 >= cmd.len() {
                                        return Err(DBError("Metadata requires key value pairs".to_string()));
                                    }
                                    metadata.insert(cmd[i].clone(), cmd[i + 1].clone());
                                    i += 2;
                                }
                            }
                        }

                        Cmd::LanceStore { dataset, text, image_base64, metadata }
                    }
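                    // Hedged usage sketch: any token that is not TEXT or IMAGE is consumed
                    // as a metadata key/value pair, e.g. (values illustrative)
                    //   LANCE STORE documents TEXT "hello world" author alice
                    // stores the text plus metadata = {"author": "alice"}.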
                    "search" => {
                        if cmd.len() < 5 {
                            return Err(DBError("LANCE SEARCH <dataset> VECTOR <vector> K <k> [NPROBES <n>] [REFINE <r>]".to_string()));
                        }
                        let dataset = cmd[2].clone();

                        if cmd[3].to_lowercase() != "vector" {
                            return Err(DBError("Expected VECTOR after dataset name".to_string()));
                        }

                        // Parse vector - comma-separated floats, optionally wrapped in brackets
                        let vector_str = &cmd[4];
                        let vector_str = vector_str.trim_start_matches('[').trim_end_matches(']');
                        let vector: Result<Vec<f32>, _> = vector_str
                            .split(',')
                            .map(|s| s.trim().parse::<f32>())
                            .collect();
                        let vector = vector.map_err(|_| DBError("Invalid vector format".to_string()))?;

                        if cmd.len() < 7 || cmd[5].to_lowercase() != "k" {
                            return Err(DBError("Expected K after vector".to_string()));
                        }
                        let k = cmd[6].parse::<usize>().map_err(|_| DBError("Invalid K value".to_string()))?;

                        let mut nprobes = None;
                        let mut refine_factor = None;
                        let mut i = 7;
                        while i < cmd.len() {
                            match cmd[i].to_lowercase().as_str() {
                                "nprobes" => {
                                    if i + 1 >= cmd.len() {
                                        return Err(DBError("NPROBES requires a value".to_string()));
                                    }
                                    nprobes = Some(cmd[i + 1].parse::<usize>().map_err(|_| DBError("Invalid NPROBES value".to_string()))?);
                                    i += 2;
                                }
                                "refine" => {
                                    if i + 1 >= cmd.len() {
                                        return Err(DBError("REFINE requires a value".to_string()));
                                    }
                                    refine_factor = Some(cmd[i + 1].parse::<usize>().map_err(|_| DBError("Invalid REFINE value".to_string()))?);
                                    i += 2;
                                }
                                _ => {
                                    return Err(DBError(format!("Unknown parameter: {}", cmd[i])));
                                }
                            }
                        }

                        Cmd::LanceSearch { dataset, vector, k, nprobes, refine_factor }
                    }
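                    // Hedged usage sketch (a short vector for brevity; real queries use DIM floats):
                    //   LANCE SEARCH documents VECTOR [0.1,0.2,0.3,0.4] K 5 NPROBES 16
                    // The surrounding brackets are optional; the parser strips them before
                    // splitting on commas.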
                    "search.text" => {
                        if cmd.len() < 6 {
                            return Err(DBError("LANCE SEARCH.TEXT <dataset> <query_text> K <k> [NPROBES <n>] [REFINE <r>]".to_string()));
                        }
                        let dataset = cmd[2].clone();
                        let query_text = cmd[3].clone();

                        if cmd[4].to_lowercase() != "k" {
                            return Err(DBError("Expected K after query text".to_string()));
                        }
                        let k = cmd[5].parse::<usize>().map_err(|_| DBError("Invalid K value".to_string()))?;

                        let mut nprobes = None;
                        let mut refine_factor = None;
                        let mut i = 6;
                        while i < cmd.len() {
                            match cmd[i].to_lowercase().as_str() {
                                "nprobes" => {
                                    if i + 1 >= cmd.len() {
                                        return Err(DBError("NPROBES requires a value".to_string()));
                                    }
                                    nprobes = Some(cmd[i + 1].parse::<usize>().map_err(|_| DBError("Invalid NPROBES value".to_string()))?);
                                    i += 2;
                                }
                                "refine" => {
                                    if i + 1 >= cmd.len() {
                                        return Err(DBError("REFINE requires a value".to_string()));
                                    }
                                    refine_factor = Some(cmd[i + 1].parse::<usize>().map_err(|_| DBError("Invalid REFINE value".to_string()))?);
                                    i += 2;
                                }
                                _ => {
                                    return Err(DBError(format!("Unknown parameter: {}", cmd[i])));
                                }
                            }
                        }

                        Cmd::LanceSearchText { dataset, query_text, k, nprobes, refine_factor }
                    }
"embed.text" => {
|
||||
if cmd.len() < 3 {
|
||||
return Err(DBError("LANCE EMBED.TEXT <text1> [text2] ...".to_string()));
|
||||
}
|
||||
let texts = cmd[2..].to_vec();
|
||||
Cmd::LanceEmbedText { texts }
|
||||
}
|
||||
"create.index" => {
|
||||
if cmd.len() < 5 {
|
||||
return Err(DBError("LANCE CREATE.INDEX <dataset> <index_type> [PARTITIONS <n>] [SUBVECTORS <n>]".to_string()));
|
||||
}
|
||||
let dataset = cmd[2].clone();
|
||||
let index_type = cmd[3].clone();
|
||||
|
||||
let mut num_partitions = None;
|
||||
let mut num_sub_vectors = None;
|
||||
let mut i = 4;
|
||||
while i < cmd.len() {
|
||||
match cmd[i].to_lowercase().as_str() {
|
||||
"partitions" => {
|
||||
if i + 1 >= cmd.len() {
|
||||
return Err(DBError("PARTITIONS requires a value".to_string()));
|
||||
}
|
||||
num_partitions = Some(cmd[i + 1].parse::<usize>().map_err(|_| DBError("Invalid PARTITIONS value".to_string()))?);
|
||||
i += 2;
|
||||
}
|
||||
"subvectors" => {
|
||||
if i + 1 >= cmd.len() {
|
||||
return Err(DBError("SUBVECTORS requires a value".to_string()));
|
||||
}
|
||||
num_sub_vectors = Some(cmd[i + 1].parse::<usize>().map_err(|_| DBError("Invalid SUBVECTORS value".to_string()))?);
|
||||
i += 2;
|
||||
}
|
||||
_ => {
|
||||
return Err(DBError(format!("Unknown parameter: {}", cmd[i])));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
Cmd::LanceCreateIndex { dataset, index_type, num_partitions, num_sub_vectors }
|
||||
}
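                    // Hedged usage sketch (IVF_PQ is the only index type the store accepts):
                    //   LANCE CREATE.INDEX documents IVF_PQ PARTITIONS 256 SUBVECTORS 16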
                    "list" => {
                        if cmd.len() != 2 {
                            return Err(DBError("LANCE LIST takes no arguments".to_string()));
                        }
                        Cmd::LanceList
                    }
                    "drop" => {
                        if cmd.len() != 3 {
                            return Err(DBError("LANCE DROP <dataset>".to_string()));
                        }
                        let dataset = cmd[2].clone();
                        Cmd::LanceDrop { dataset }
                    }
                    "info" => {
                        if cmd.len() != 3 {
                            return Err(DBError("LANCE INFO <dataset>".to_string()));
                        }
                        let dataset = cmd[2].clone();
                        Cmd::LanceInfo { dataset }
                    }
                    _ => return Err(DBError(format!("unsupported LANCE subcommand {:?}", cmd))),
                }
            }
            _ => Cmd::Unknow(cmd[0].clone()),
        },
        protocol,
@@ -908,34 +1006,18 @@ impl Cmd {
            Cmd::AgeSignName(name, message) => Ok(crate::age::cmd_age_sign_name(server, &name, &message).await),
            Cmd::AgeVerifyName(name, message, sig_b64) => Ok(crate::age::cmd_age_verify_name(server, &name, &message, &sig_b64).await),
            Cmd::AgeList => Ok(crate::age::cmd_age_list(server).await),

            // Full-text search commands
            Cmd::FtCreate { index_name, schema } => {
                search_cmd::ft_create_cmd(server, index_name, schema).await
            }
            Cmd::FtAdd { index_name, doc_id, score, fields } => {
                search_cmd::ft_add_cmd(server, index_name, doc_id, score, fields).await
            }
            Cmd::FtSearch { index_name, query, filters, limit, offset, return_fields } => {
                search_cmd::ft_search_cmd(server, index_name, query, filters, limit, offset, return_fields).await
            }
            Cmd::FtDel(index_name, doc_id) => {
                search_cmd::ft_del_cmd(server, index_name, doc_id).await
            }
            Cmd::FtInfo(index_name) => {
                search_cmd::ft_info_cmd(server, index_name).await
            }
            Cmd::FtDrop(index_name) => {
                search_cmd::ft_drop_cmd(server, index_name).await
            }
            Cmd::FtAlter { .. } => {
                // Not implemented yet
                Ok(Protocol::err("FT.ALTER not implemented yet"))
            }
            Cmd::FtAggregate { .. } => {
                // Not implemented yet
                Ok(Protocol::err("FT.AGGREGATE not implemented yet"))
            }

            // Lance vector database commands
            Cmd::LanceCreate { dataset, dim, schema } => lance_create_cmd(server, &dataset, dim, &schema).await,
            Cmd::LanceStore { dataset, text, image_base64, metadata } => lance_store_cmd(server, &dataset, text.as_deref(), image_base64.as_deref(), &metadata).await,
            Cmd::LanceSearch { dataset, vector, k, nprobes, refine_factor } => lance_search_cmd(server, &dataset, &vector, k, nprobes, refine_factor).await,
            Cmd::LanceSearchText { dataset, query_text, k, nprobes, refine_factor } => lance_search_text_cmd(server, &dataset, &query_text, k, nprobes, refine_factor).await,
            Cmd::LanceEmbedText { texts } => lance_embed_text_cmd(server, &texts).await,
            Cmd::LanceCreateIndex { dataset, index_type, num_partitions, num_sub_vectors } => lance_create_index_cmd(server, &dataset, &index_type, num_partitions, num_sub_vectors).await,
            Cmd::LanceList => lance_list_cmd(server).await,
            Cmd::LanceDrop { dataset } => lance_drop_cmd(server, &dataset).await,
            Cmd::LanceInfo { dataset } => lance_info_cmd(server, &dataset).await,

            Cmd::Unknow(s) => Ok(Protocol::err(&format!("ERR unknown command `{}`", s))),
        }
    }
@@ -1719,3 +1801,243 @@ fn command_cmd(args: &[String]) -> Result<Protocol, DBError> {
        _ => Ok(Protocol::Array(vec![])),
    }
}

// Helper function to create an Arrow schema from field specifications
fn create_schema_from_fields(dim: usize, fields: &[(String, String)]) -> arrow::datatypes::Schema {
    let mut schema_fields = Vec::new();

    // Always add the vector field first
    let vector_field = arrow::datatypes::Field::new(
        "vector",
        arrow::datatypes::DataType::FixedSizeList(
            Arc::new(arrow::datatypes::Field::new("item", arrow::datatypes::DataType::Float32, true)),
            dim as i32,
        ),
        false,
    );
    schema_fields.push(vector_field);

    // Add custom fields
    for (name, field_type) in fields {
        let data_type = match field_type.to_lowercase().as_str() {
            "string" | "text" => arrow::datatypes::DataType::Utf8,
            "int" | "integer" => arrow::datatypes::DataType::Int64,
            "float" => arrow::datatypes::DataType::Float64,
            "bool" | "boolean" => arrow::datatypes::DataType::Boolean,
            _ => arrow::datatypes::DataType::Utf8, // Default to string
        };
        schema_fields.push(arrow::datatypes::Field::new(name, data_type, true));
    }

    arrow::datatypes::Schema::new(schema_fields)
}
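// Illustrative sketch of the mapping above (field names assumed): calling
// create_schema_from_fields(768, &[("title".into(), "string".into()), ("year".into(), "int".into())])
// produces the Arrow schema
//   vector: FixedSizeList<Float32, 768> (non-nullable), title: Utf8, year: Int64.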

// Lance vector database command implementations
async fn lance_create_cmd(
    server: &Server,
    dataset: &str,
    dim: usize,
    schema: &[(String, String)],
) -> Result<Protocol, DBError> {
    match server.lance_store() {
        Ok(lance_store) => {
            match lance_store.create_dataset(dataset, create_schema_from_fields(dim, schema)).await {
                Ok(_) => Ok(Protocol::SimpleString("OK".to_string())),
                Err(e) => Ok(Protocol::err(&sanitize_error_message(&format!("ERR {}", e)))),
            }
        }
        Err(e) => Ok(Protocol::err(&sanitize_error_message(&format!("ERR Lance store not available: {}", e)))),
    }
}

async fn lance_store_cmd(
    server: &Server,
    dataset: &str,
    text: Option<&str>,
    image_base64: Option<&str>,
    metadata: &std::collections::HashMap<String, String>,
) -> Result<Protocol, DBError> {
    match server.lance_store() {
        Ok(lance_store) => {
            match lance_store.store_multimodal(
                server,
                dataset,
                text.map(|s| s.to_string()),
                image_base64.and_then(|s| base64::engine::general_purpose::STANDARD.decode(s).ok()),
                metadata.clone(),
            ).await {
                Ok(id) => Ok(Protocol::BulkString(id)),
                Err(e) => Ok(Protocol::err(&sanitize_error_message(&format!("ERR {}", e)))),
            }
        }
        Err(e) => Ok(Protocol::err(&sanitize_error_message(&format!("ERR Lance store not available: {}", e)))),
    }
}

async fn lance_search_cmd(
    server: &Server,
    dataset: &str,
    vector: &[f32],
    k: usize,
    nprobes: Option<usize>,
    refine_factor: Option<usize>,
) -> Result<Protocol, DBError> {
    match server.lance_store() {
        Ok(lance_store) => {
            match lance_store.search_vectors(dataset, vector.to_vec(), k, nprobes, refine_factor).await {
                Ok(results) => {
                    let mut response = Vec::new();
                    for (distance, metadata) in results {
                        let mut item = Vec::new();
                        item.push(Protocol::BulkString("distance".to_string()));
                        item.push(Protocol::BulkString(distance.to_string()));
                        for (key, value) in metadata {
                            item.push(Protocol::BulkString(key));
                            item.push(Protocol::BulkString(value));
                        }
                        response.push(Protocol::Array(item));
                    }
                    Ok(Protocol::Array(response))
                }
                Err(e) => Ok(Protocol::err(&sanitize_error_message(&format!("ERR {}", e)))),
            }
        }
        Err(e) => Ok(Protocol::err(&sanitize_error_message(&format!("ERR Lance store not available: {}", e)))),
    }
}

async fn lance_search_text_cmd(
    server: &Server,
    dataset: &str,
    query_text: &str,
    k: usize,
    nprobes: Option<usize>,
    refine_factor: Option<usize>,
) -> Result<Protocol, DBError> {
    match server.lance_store() {
        Ok(lance_store) => {
            match lance_store.search_with_text(server, dataset, query_text.to_string(), k, nprobes, refine_factor).await {
                Ok(results) => {
                    let mut response = Vec::new();
                    for (distance, metadata) in results {
                        let mut item = Vec::new();
                        item.push(Protocol::BulkString("distance".to_string()));
                        item.push(Protocol::BulkString(distance.to_string()));
                        for (key, value) in metadata {
                            item.push(Protocol::BulkString(key));
                            item.push(Protocol::BulkString(value));
                        }
                        response.push(Protocol::Array(item));
                    }
                    Ok(Protocol::Array(response))
                }
                Err(e) => Ok(Protocol::err(&sanitize_error_message(&format!("ERR {}", e)))),
            }
        }
        Err(e) => Ok(Protocol::err(&sanitize_error_message(&format!("ERR Lance store not available: {}", e)))),
    }
}

// Helper function to sanitize error messages for the Redis protocol
fn sanitize_error_message(msg: &str) -> String {
    // Replace newlines, carriage returns, and tabs with spaces
    let sanitized = msg
        .replace('\n', " ")
        .replace('\r', " ")
        .replace('\t', " ");

    // Limit to 200 characters to avoid overly long error messages;
    // truncate by characters, not bytes, so we never split a UTF-8 code point
    if sanitized.chars().count() > 200 {
        let truncated: String = sanitized.chars().take(197).collect();
        format!("{}...", truncated)
    } else {
        sanitized
    }
}

async fn lance_embed_text_cmd(
    server: &Server,
    texts: &[String],
) -> Result<Protocol, DBError> {
    match server.lance_store() {
        Ok(lance_store) => {
            match lance_store.embed_text(server, texts.to_vec()).await {
                Ok(embeddings) => {
                    let mut response = Vec::new();
                    for embedding in embeddings {
                        let vector_strings: Vec<Protocol> = embedding
                            .iter()
                            .map(|f| Protocol::BulkString(f.to_string()))
                            .collect();
                        response.push(Protocol::Array(vector_strings));
                    }
                    Ok(Protocol::Array(response))
                }
                Err(e) => Ok(Protocol::err(&sanitize_error_message(&format!("ERR {}", e)))),
            }
        }
        Err(e) => Ok(Protocol::err(&sanitize_error_message(&format!("ERR Lance store not available: {}", e)))),
    }
}

async fn lance_create_index_cmd(
    server: &Server,
    dataset: &str,
    index_type: &str,
    num_partitions: Option<usize>,
    num_sub_vectors: Option<usize>,
) -> Result<Protocol, DBError> {
    match server.lance_store() {
        Ok(lance_store) => {
            match lance_store.create_index(dataset, index_type, num_partitions, num_sub_vectors).await {
                Ok(_) => Ok(Protocol::SimpleString("OK".to_string())),
                Err(e) => Ok(Protocol::err(&sanitize_error_message(&format!("ERR {}", e)))),
            }
        }
        Err(e) => Ok(Protocol::err(&sanitize_error_message(&format!("ERR Lance store not available: {}", e)))),
    }
}

async fn lance_list_cmd(server: &Server) -> Result<Protocol, DBError> {
    match server.lance_store() {
        Ok(lance_store) => {
            match lance_store.list_datasets().await {
                Ok(datasets) => {
                    let response: Vec<Protocol> = datasets
                        .into_iter()
                        .map(Protocol::BulkString)
                        .collect();
                    Ok(Protocol::Array(response))
                }
                Err(e) => Ok(Protocol::err(&sanitize_error_message(&format!("ERR {}", e)))),
            }
        }
        Err(e) => Ok(Protocol::err(&sanitize_error_message(&format!("ERR Lance store not available: {}", e)))),
    }
}

async fn lance_drop_cmd(server: &Server, dataset: &str) -> Result<Protocol, DBError> {
    match server.lance_store() {
        Ok(lance_store) => {
            match lance_store.drop_dataset(dataset).await {
                Ok(_) => Ok(Protocol::SimpleString("OK".to_string())),
                Err(e) => Ok(Protocol::err(&sanitize_error_message(&format!("ERR {}", e)))),
            }
        }
        Err(e) => Ok(Protocol::err(&sanitize_error_message(&format!("ERR Lance store not available: {}", e)))),
    }
}

async fn lance_info_cmd(server: &Server, dataset: &str) -> Result<Protocol, DBError> {
    match server.lance_store() {
        Ok(lance_store) => {
            match lance_store.get_dataset_info(dataset).await {
                Ok(info) => {
                    let mut response = Vec::new();
                    for (key, value) in info {
                        response.push(Protocol::BulkString(key));
                        response.push(Protocol::BulkString(value));
                    }
                    Ok(Protocol::Array(response))
                }
                Err(e) => Ok(Protocol::err(&sanitize_error_message(&format!("ERR {}", e)))),
            }
        }
        Err(e) => Ok(Protocol::err(&sanitize_error_message(&format!("ERR Lance store not available: {}", e)))),
    }
}
43 src/error.rs
@@ -9,6 +9,12 @@ use bincode;
#[derive(Debug)]
pub struct DBError(pub String);

impl std::fmt::Display for DBError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "{}", self.0)
    }
}

impl From<std::io::Error> for DBError {
    fn from(item: std::io::Error) -> Self {
        DBError(item.to_string())
@@ -92,3 +98,40 @@ impl From<chacha20poly1305::Error> for DBError {
        DBError(item.to_string())
    }
}

// Lance and related dependencies error handling
impl From<lance::Error> for DBError {
    fn from(item: lance::Error) -> Self {
        DBError(item.to_string())
    }
}

impl From<arrow::error::ArrowError> for DBError {
    fn from(item: arrow::error::ArrowError) -> Self {
        DBError(item.to_string())
    }
}

impl From<reqwest::Error> for DBError {
    fn from(item: reqwest::Error) -> Self {
        DBError(item.to_string())
    }
}

impl From<image::ImageError> for DBError {
    fn from(item: image::ImageError) -> Self {
        DBError(item.to_string())
    }
}

impl From<uuid::Error> for DBError {
    fn from(item: uuid::Error) -> Self {
        DBError(item.to_string())
    }
}

impl From<base64::DecodeError> for DBError {
    fn from(item: base64::DecodeError) -> Self {
        DBError(item.to_string())
    }
}
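
// With the conversions above in place, fallible calls into these crates can
// bubble up through `?`. A minimal sketch (function name hypothetical):
//
//   fn decode_payload(s: &str) -> Result<Vec<u8>, DBError> {
//       use base64::Engine;
//       Ok(base64::engine::general_purpose::STANDARD.decode(s)?)
//   }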
609 src/lance_store.rs Normal file
@@ -0,0 +1,609 @@
use std::collections::HashMap;
use std::path::PathBuf;
use std::sync::Arc;
use tokio::sync::RwLock;

use arrow::array::{Float32Array, StringArray, ArrayRef, FixedSizeListArray, Array};
use arrow::datatypes::{DataType, Field, Schema, SchemaRef};
use arrow::record_batch::{RecordBatch, RecordBatchReader};
use arrow::error::ArrowError;
use lance::dataset::{Dataset, WriteParams, WriteMode};
use lance::index::vector::VectorIndexParams;
use lance_index::vector::pq::PQBuildParams;
use lance_index::vector::ivf::IvfBuildParams;
use lance_index::DatasetIndexExt;
use lance_linalg::distance::MetricType;
use futures::TryStreamExt;
use base64::Engine;

use serde::{Deserialize, Serialize};
use crate::error::DBError;

// Simple RecordBatchReader implementation for Vec<RecordBatch>
struct VecRecordBatchReader {
    schema: SchemaRef,
    batches: std::vec::IntoIter<Result<RecordBatch, ArrowError>>,
}

impl VecRecordBatchReader {
    fn new(batches: Vec<RecordBatch>) -> Self {
        // Capture the schema from the first batch so downstream writers see
        // the real schema instead of an empty placeholder
        let schema = batches
            .first()
            .map(|batch| batch.schema())
            .unwrap_or_else(|| Arc::new(Schema::empty()));
        let result_batches = batches.into_iter().map(Ok).collect::<Vec<_>>();
        Self {
            schema,
            batches: result_batches.into_iter(),
        }
    }
}

impl Iterator for VecRecordBatchReader {
    type Item = Result<RecordBatch, ArrowError>;

    fn next(&mut self) -> Option<Self::Item> {
        self.batches.next()
    }
}

impl RecordBatchReader for VecRecordBatchReader {
    fn schema(&self) -> SchemaRef {
        self.schema.clone()
    }
}

#[derive(Debug, Serialize, Deserialize)]
struct EmbeddingRequest {
    texts: Option<Vec<String>>,
    images: Option<Vec<String>>, // base64 encoded
    model: Option<String>,
}

#[derive(Debug, Serialize, Deserialize)]
struct EmbeddingResponse {
    embeddings: Vec<Vec<f32>>,
    model: String,
    usage: Option<HashMap<String, u32>>,
}

// Ollama-specific request/response structures
#[derive(Debug, Serialize, Deserialize)]
struct OllamaEmbeddingRequest {
    model: String,
    prompt: String,
}

#[derive(Debug, Serialize, Deserialize)]
struct OllamaEmbeddingResponse {
    embedding: Vec<f32>,
}

pub struct LanceStore {
    datasets: Arc<RwLock<HashMap<String, Arc<Dataset>>>>,
    data_dir: PathBuf,
    http_client: reqwest::Client,
}

impl LanceStore {
    pub async fn new(data_dir: PathBuf) -> Result<Self, DBError> {
        // Create data directory if it doesn't exist
        std::fs::create_dir_all(&data_dir)
            .map_err(|e| DBError(format!("Failed to create Lance data directory: {}", e)))?;

        let http_client = reqwest::Client::builder()
            .timeout(std::time::Duration::from_secs(30))
            .build()
            .map_err(|e| DBError(format!("Failed to create HTTP client: {}", e)))?;

        Ok(Self {
            datasets: Arc::new(RwLock::new(HashMap::new())),
            data_dir,
            http_client,
        })
    }

    /// Get the embedding service URL from Redis config, defaulting to local Ollama
    async fn get_embedding_url(&self, server: &crate::server::Server) -> Result<String, DBError> {
        // Read the embedding URL directly from the current storage backend
        let storage = server.current_storage()?;
        match storage.hget("config:core:aiembed", "url")? {
            Some(url) => Ok(url),
            None => Ok("http://localhost:11434".to_string()), // Default to local Ollama
        }
    }
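    // Hedged sketch: the URL read above can be overridden at runtime with a plain
    // hash write against the same key/field (target URL illustrative):
    //   HSET config:core:aiembed url https://embeddings.example.com/embed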

    /// Check if we're using Ollama (default) or a custom embedding service
    async fn is_ollama_service(&self, server: &crate::server::Server) -> Result<bool, DBError> {
        let url = self.get_embedding_url(server).await?;
        Ok(url.contains("localhost:11434") || url.contains("127.0.0.1:11434"))
    }

    /// Call the external embedding service (Ollama or custom)
    async fn call_embedding_service(
        &self,
        server: &crate::server::Server,
        texts: Option<Vec<String>>,
        images: Option<Vec<String>>,
    ) -> Result<Vec<Vec<f32>>, DBError> {
        let base_url = self.get_embedding_url(server).await?;
        let is_ollama = self.is_ollama_service(server).await?;

        if is_ollama {
            // Use the Ollama API format
            if let Some(texts) = texts {
                let mut embeddings = Vec::new();
                for text in texts {
                    let url = format!("{}/api/embeddings", base_url);
                    let request = OllamaEmbeddingRequest {
                        model: "nomic-embed-text".to_string(),
                        prompt: text,
                    };

                    let response = self.http_client
                        .post(&url)
                        .json(&request)
                        .send()
                        .await
                        .map_err(|e| DBError(format!("Failed to call Ollama embedding service: {}", e)))?;

                    if !response.status().is_success() {
                        let status = response.status();
                        let error_text = response.text().await.unwrap_or_default();
                        return Err(DBError(format!(
                            "Ollama embedding service returned error {}: {}",
                            status, error_text
                        )));
                    }

                    let ollama_response: OllamaEmbeddingResponse = response
                        .json()
                        .await
                        .map_err(|e| DBError(format!("Failed to parse Ollama embedding response: {}", e)))?;

                    embeddings.push(ollama_response.embedding);
                }
                Ok(embeddings)
            } else if let Some(_images) = images {
                // Ollama doesn't support image embeddings with this API yet
                Err(DBError("Image embeddings not supported with Ollama. Please configure a custom embedding service.".to_string()))
            } else {
                Err(DBError("No text or images provided for embedding".to_string()))
            }
        } else {
            // Use the custom embedding service API format
            let request = EmbeddingRequest {
                texts,
                images,
                model: None, // Let the service use its default
            };

            let response = self.http_client
                .post(&base_url)
                .json(&request)
                .send()
                .await
                .map_err(|e| DBError(format!("Failed to call embedding service: {}", e)))?;

            if !response.status().is_success() {
                let status = response.status();
                let error_text = response.text().await.unwrap_or_default();
                return Err(DBError(format!(
                    "Embedding service returned error {}: {}",
                    status, error_text
                )));
            }

            let embedding_response: EmbeddingResponse = response
                .json()
                .await
                .map_err(|e| DBError(format!("Failed to parse embedding response: {}", e)))?;

            Ok(embedding_response.embeddings)
        }
    }
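    // For reference, the Ollama exchange serialized by the structs above looks
    // roughly like this (values illustrative; one request is sent per text):
    //   POST {base_url}/api/embeddings
    //   request:  {"model": "nomic-embed-text", "prompt": "hello world"}
    //   response: {"embedding": [0.013, -0.092, ...]}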

    pub async fn embed_text(
        &self,
        server: &crate::server::Server,
        texts: Vec<String>,
    ) -> Result<Vec<Vec<f32>>, DBError> {
        if texts.is_empty() {
            return Ok(Vec::new());
        }

        self.call_embedding_service(server, Some(texts), None).await
    }

    pub async fn embed_image(
        &self,
        server: &crate::server::Server,
        image_bytes: Vec<u8>,
    ) -> Result<Vec<f32>, DBError> {
        // Convert image bytes to base64
        let base64_image = base64::engine::general_purpose::STANDARD.encode(&image_bytes);

        let embeddings = self.call_embedding_service(
            server,
            None,
            Some(vec![base64_image]),
        ).await?;

        embeddings.into_iter()
            .next()
            .ok_or_else(|| DBError("No embedding returned for image".to_string()))
    }

    pub async fn create_dataset(
        &self,
        name: &str,
        schema: Schema,
    ) -> Result<(), DBError> {
        let dataset_path = self.data_dir.join(format!("{}.lance", name));

        // Create empty dataset with schema
        let write_params = WriteParams {
            mode: WriteMode::Create,
            ..Default::default()
        };

        // Create an empty RecordBatch with the schema
        let empty_batch = RecordBatch::new_empty(Arc::new(schema));

        // Use RecordBatchReader for Lance 0.33
        let reader = VecRecordBatchReader::new(vec![empty_batch]);
        let dataset = Dataset::write(
            reader,
            dataset_path.to_str().unwrap(),
            Some(write_params),
        ).await
        .map_err(|e| DBError(format!("Failed to create dataset: {}", e)))?;

        let mut datasets = self.datasets.write().await;
        datasets.insert(name.to_string(), Arc::new(dataset));

        Ok(())
    }

    pub async fn write_vectors(
        &self,
        dataset_name: &str,
        vectors: Vec<Vec<f32>>,
        metadata: Option<HashMap<String, Vec<String>>>,
    ) -> Result<usize, DBError> {
        let dataset_path = self.data_dir.join(format!("{}.lance", dataset_name));

        // Open or get cached dataset
        let _dataset = self.get_or_open_dataset(dataset_name).await?;

        // Build RecordBatch
        let num_vectors = vectors.len();
        if num_vectors == 0 {
            return Ok(0);
        }

        let dim = vectors.first()
            .ok_or_else(|| DBError("Empty vectors".to_string()))?
            .len();

        // Flatten vectors
        let flat_vectors: Vec<f32> = vectors.into_iter().flatten().collect();
        let values_array = Float32Array::from(flat_vectors);
        let field = Arc::new(Field::new("item", DataType::Float32, true));
        let vector_array = FixedSizeListArray::try_new(
            field,
            dim as i32,
            Arc::new(values_array),
            None,
        ).map_err(|e| DBError(format!("Failed to create vector array: {}", e)))?;

        let mut arrays: Vec<ArrayRef> = vec![Arc::new(vector_array)];
        let mut fields = vec![Field::new(
            "vector",
            DataType::FixedSizeList(
                Arc::new(Field::new("item", DataType::Float32, true)),
                dim as i32,
            ),
            false,
        )];

        // Add metadata columns if provided
        if let Some(metadata) = metadata {
            for (key, values) in metadata {
                if values.len() != num_vectors {
                    return Err(DBError(format!(
                        "Metadata field '{}' has {} values but expected {}",
                        key, values.len(), num_vectors
                    )));
                }
                let array = StringArray::from(values);
                arrays.push(Arc::new(array));
                fields.push(Field::new(&key, DataType::Utf8, true));
            }
        }

        let schema = Arc::new(Schema::new(fields));
        let batch = RecordBatch::try_new(schema, arrays)
            .map_err(|e| DBError(format!("Failed to create RecordBatch: {}", e)))?;

        // Append to dataset
        let write_params = WriteParams {
            mode: WriteMode::Append,
            ..Default::default()
        };

        let reader = VecRecordBatchReader::new(vec![batch]);
        Dataset::write(
            reader,
            dataset_path.to_str().unwrap(),
            Some(write_params),
        ).await
        .map_err(|e| DBError(format!("Failed to write to dataset: {}", e)))?;

        // Refresh cached dataset
        let mut datasets = self.datasets.write().await;
        datasets.remove(dataset_name);

        Ok(num_vectors)
    }

    pub async fn search_vectors(
        &self,
        dataset_name: &str,
        query_vector: Vec<f32>,
        k: usize,
        nprobes: Option<usize>,
        _refine_factor: Option<usize>,
    ) -> Result<Vec<(f32, HashMap<String, String>)>, DBError> {
        let dataset = self.get_or_open_dataset(dataset_name).await?;

        // Build query
        let query_array = Float32Array::from(query_vector.clone());
        let mut query = dataset.scan();
        query.nearest(
            "vector",
            &query_array,
            k,
        ).map_err(|e| DBError(format!("Failed to build search query: {}", e)))?;

        if let Some(nprobes) = nprobes {
            query.nprobs(nprobes);
        }

        // Note: refine_factor might not be available in this Lance version
        // if let Some(refine) = refine_factor {
        //     query.refine_factor(refine);
        // }

        // Execute search
        let results = query
            .try_into_stream()
            .await
            .map_err(|e| DBError(format!("Failed to execute search: {}", e)))?
            .try_collect::<Vec<_>>()
            .await
            .map_err(|e| DBError(format!("Failed to collect results: {}", e)))?;

        // Process results
        let mut output = Vec::new();
        for batch in results {
            // Get distances
            let distances = batch
                .column_by_name("_distance")
                .ok_or_else(|| DBError("No distance column".to_string()))?
                .as_any()
                .downcast_ref::<Float32Array>()
                .ok_or_else(|| DBError("Invalid distance type".to_string()))?;

            // Get metadata
            for i in 0..batch.num_rows() {
                let distance = distances.value(i);
                let mut metadata = HashMap::new();

                for field in batch.schema().fields() {
                    if field.name() != "vector" && field.name() != "_distance" {
                        if let Some(col) = batch.column_by_name(field.name()) {
                            if let Some(str_array) = col.as_any().downcast_ref::<StringArray>() {
                                if !str_array.is_null(i) {
                                    metadata.insert(
                                        field.name().to_string(),
                                        str_array.value(i).to_string(),
                                    );
                                }
                            }
                        }
                    }
                }

                output.push((distance, metadata));
            }
        }

        Ok(output)
    }

    pub async fn store_multimodal(
        &self,
        server: &crate::server::Server,
        dataset_name: &str,
        text: Option<String>,
        image_bytes: Option<Vec<u8>>,
        metadata: HashMap<String, String>,
    ) -> Result<String, DBError> {
        // Generate ID
        let id = uuid::Uuid::new_v4().to_string();

        // Generate embeddings using the external service
        let embedding = if let Some(text) = text.as_ref() {
            self.embed_text(server, vec![text.clone()]).await?
                .into_iter()
                .next()
                .ok_or_else(|| DBError("No embedding returned".to_string()))?
        } else if let Some(img) = image_bytes.as_ref() {
            self.embed_image(server, img.clone()).await?
        } else {
            return Err(DBError("No text or image provided".to_string()));
        };

        // Prepare metadata
        let mut full_metadata = metadata;
        full_metadata.insert("id".to_string(), id.clone());
        if let Some(text) = text {
            full_metadata.insert("text".to_string(), text);
        }
        if let Some(img) = image_bytes {
            full_metadata.insert("image_base64".to_string(), base64::engine::general_purpose::STANDARD.encode(img));
        }

        // Convert metadata to column vectors
        let mut metadata_cols = HashMap::new();
        for (key, value) in full_metadata {
            metadata_cols.insert(key, vec![value]);
        }

        // Write to dataset
        self.write_vectors(dataset_name, vec![embedding], Some(metadata_cols)).await?;

        Ok(id)
    }

    pub async fn search_with_text(
        &self,
        server: &crate::server::Server,
        dataset_name: &str,
        query_text: String,
        k: usize,
        nprobes: Option<usize>,
        refine_factor: Option<usize>,
    ) -> Result<Vec<(f32, HashMap<String, String>)>, DBError> {
        // Embed the query text using the external service
        let embeddings = self.embed_text(server, vec![query_text]).await?;
        let query_vector = embeddings.into_iter()
            .next()
            .ok_or_else(|| DBError("No embedding returned for query".to_string()))?;

        // Search with the embedding
        self.search_vectors(dataset_name, query_vector, k, nprobes, refine_factor).await
    }

    pub async fn create_index(
        &self,
        dataset_name: &str,
        index_type: &str,
        num_partitions: Option<usize>,
        num_sub_vectors: Option<usize>,
    ) -> Result<(), DBError> {
        let _dataset = self.get_or_open_dataset(dataset_name).await?;

        match index_type.to_uppercase().as_str() {
            "IVF_PQ" => {
                let ivf_params = IvfBuildParams {
                    num_partitions: num_partitions.unwrap_or(256),
                    ..Default::default()
                };
                let pq_params = PQBuildParams {
                    num_sub_vectors: num_sub_vectors.unwrap_or(16),
                    ..Default::default()
                };
                let params = VectorIndexParams::with_ivf_pq_params(
                    MetricType::L2,
                    ivf_params,
                    pq_params,
                );

                // Get a mutable reference to the dataset
                let mut dataset_mut = Dataset::open(self.data_dir.join(format!("{}.lance", dataset_name)).to_str().unwrap())
                    .await
                    .map_err(|e| DBError(format!("Failed to open dataset for indexing: {}", e)))?;

                dataset_mut.create_index(
                    &["vector"],
                    lance_index::IndexType::Vector,
                    None,
                    &params,
                    true,
                ).await
                .map_err(|e| DBError(format!("Failed to create index: {}", e)))?;
            }
            _ => return Err(DBError(format!("Unsupported index type: {}", index_type))),
        }

        Ok(())
    }
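    // Tuning sketch (rules of thumb, not guarantees): num_partitions is often
    // chosen near sqrt(row_count), and num_sub_vectors must evenly divide the
    // vector dimension (e.g. 16 sub-vectors for 768-dim embeddings).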

    async fn get_or_open_dataset(&self, name: &str) -> Result<Arc<Dataset>, DBError> {
        let mut datasets = self.datasets.write().await;

        if let Some(dataset) = datasets.get(name) {
            return Ok(dataset.clone());
        }

        let dataset_path = self.data_dir.join(format!("{}.lance", name));
        if !dataset_path.exists() {
            return Err(DBError(format!("Dataset '{}' does not exist", name)));
        }

        let dataset = Dataset::open(dataset_path.to_str().unwrap())
            .await
            .map_err(|e| DBError(format!("Failed to open dataset: {}", e)))?;

        let dataset = Arc::new(dataset);
        datasets.insert(name.to_string(), dataset.clone());

        Ok(dataset)
    }

    pub async fn list_datasets(&self) -> Result<Vec<String>, DBError> {
        let mut datasets = Vec::new();

        let entries = std::fs::read_dir(&self.data_dir)
            .map_err(|e| DBError(format!("Failed to read data directory: {}", e)))?;

        for entry in entries {
            let entry = entry.map_err(|e| DBError(format!("Failed to read entry: {}", e)))?;
            let path = entry.path();

            if path.is_dir() {
                if let Some(name) = path.file_name() {
                    if let Some(name_str) = name.to_str() {
                        if name_str.ends_with(".lance") {
                            let dataset_name = name_str.trim_end_matches(".lance");
                            datasets.push(dataset_name.to_string());
                        }
                    }
                }
            }
        }

        Ok(datasets)
    }

    pub async fn drop_dataset(&self, name: &str) -> Result<(), DBError> {
        // Remove from cache
        let mut datasets = self.datasets.write().await;
        datasets.remove(name);

        // Delete from disk
        let dataset_path = self.data_dir.join(format!("{}.lance", name));
        if dataset_path.exists() {
            std::fs::remove_dir_all(dataset_path)
                .map_err(|e| DBError(format!("Failed to delete dataset: {}", e)))?;
        }

        Ok(())
    }

    pub async fn get_dataset_info(&self, name: &str) -> Result<HashMap<String, String>, DBError> {
        let dataset = self.get_or_open_dataset(name).await?;

        let mut info = HashMap::new();
        info.insert("name".to_string(), name.to_string());
        info.insert("version".to_string(), dataset.version().version.to_string());
        info.insert("num_rows".to_string(), dataset.count_rows(None).await?.to_string());

        // Get schema info
        let schema = dataset.schema();
        let fields: Vec<String> = schema.fields
            .iter()
            .map(|f| format!("{}:{}", f.name, f.data_type()))
            .collect();
        info.insert("schema".to_string(), fields.join(", "));

        Ok(info)
    }
}
@@ -2,11 +2,10 @@ pub mod age; // NEW
pub mod cmd;
pub mod crypto;
pub mod error;
pub mod lance_store; // Add Lance store module
pub mod options;
pub mod protocol;
pub mod search_cmd; // Add this
pub mod server;
pub mod storage;
pub mod storage_trait; // Add this
pub mod storage_sled; // Add this
pub mod tantivy_search;
@@ -1,272 +0,0 @@
use crate::{
    error::DBError,
    protocol::Protocol,
    server::Server,
    tantivy_search::{
        TantivySearch, FieldDef, NumericType, IndexConfig,
        SearchOptions, Filter, FilterType
    },
};
use std::collections::HashMap;
use std::sync::Arc;

pub async fn ft_create_cmd(
    server: &Server,
    index_name: String,
    schema: Vec<(String, String, Vec<String>)>,
) -> Result<Protocol, DBError> {
    // Parse schema into field definitions
    let mut field_definitions = Vec::new();

    for (field_name, field_type, options) in schema {
        let field_def = match field_type.to_uppercase().as_str() {
            "TEXT" => {
                let mut weight = 1.0;
                let mut sortable = false;
                let mut no_index = false;

                for opt in &options {
                    match opt.to_uppercase().as_str() {
                        "WEIGHT" => {
                            // Next option should be the weight value
                            if let Some(idx) = options.iter().position(|x| x == opt) {
                                if idx + 1 < options.len() {
                                    weight = options[idx + 1].parse().unwrap_or(1.0);
                                }
                            }
                        }
                        "SORTABLE" => sortable = true,
                        "NOINDEX" => no_index = true,
                        _ => {}
                    }
                }

                FieldDef::Text {
                    stored: true,
                    indexed: !no_index,
                    tokenized: true,
                    fast: sortable,
                }
            }
            "NUMERIC" => {
                let mut sortable = false;

                for opt in &options {
                    if opt.to_uppercase() == "SORTABLE" {
                        sortable = true;
                    }
                }

                FieldDef::Numeric {
                    stored: true,
                    indexed: true,
                    fast: sortable,
                    precision: NumericType::F64,
                }
            }
            "TAG" => {
                let mut separator = ",".to_string();
                let mut case_sensitive = false;

                for i in 0..options.len() {
                    match options[i].to_uppercase().as_str() {
                        "SEPARATOR" => {
                            if i + 1 < options.len() {
                                separator = options[i + 1].clone();
                            }
                        }
                        "CASESENSITIVE" => case_sensitive = true,
                        _ => {}
                    }
                }

                FieldDef::Tag {
                    stored: true,
                    separator,
                    case_sensitive,
                }
            }
            "GEO" => {
                FieldDef::Geo { stored: true }
            }
            _ => {
                return Err(DBError(format!("Unknown field type: {}", field_type)));
            }
        };

        field_definitions.push((field_name, field_def));
    }

    // Create the search index
    let search_path = server.search_index_path();
    let config = IndexConfig::default();

    println!("Creating search index '{}' at path: {:?}", index_name, search_path);
    println!("Field definitions: {:?}", field_definitions);

    let search_index = TantivySearch::new_with_schema(
        search_path,
        index_name.clone(),
        field_definitions,
        Some(config),
    )?;

    println!("Search index '{}' created successfully", index_name);

    // Store in registry
    let mut indexes = server.search_indexes.write().unwrap();
    indexes.insert(index_name, Arc::new(search_index));

    Ok(Protocol::SimpleString("OK".to_string()))
}

pub async fn ft_add_cmd(
    server: &Server,
    index_name: String,
    doc_id: String,
    _score: f64,
    fields: HashMap<String, String>,
) -> Result<Protocol, DBError> {
    let indexes = server.search_indexes.read().unwrap();

    let search_index = indexes.get(&index_name)
        .ok_or_else(|| DBError(format!("Index '{}' not found", index_name)))?;

    search_index.add_document_with_fields(&doc_id, fields)?;

    Ok(Protocol::SimpleString("OK".to_string()))
}

pub async fn ft_search_cmd(
    server: &Server,
    index_name: String,
    query: String,
    filters: Vec<(String, String)>,
    limit: Option<usize>,
    offset: Option<usize>,
    return_fields: Option<Vec<String>>,
) -> Result<Protocol, DBError> {
    let indexes = server.search_indexes.read().unwrap();

    let search_index = indexes.get(&index_name)
        .ok_or_else(|| DBError(format!("Index '{}' not found", index_name)))?;

    // Convert filters to search filters
    let search_filters = filters.into_iter().map(|(field, value)| {
        Filter {
            field,
            filter_type: FilterType::Equals(value),
        }
    }).collect();

    let options = SearchOptions {
        limit: limit.unwrap_or(10),
        offset: offset.unwrap_or(0),
        filters: search_filters,
        sort_by: None,
        return_fields,
        highlight: false,
    };

    let results = search_index.search_with_options(&query, options)?;

    // Format results as Redis protocol
    let mut response = Vec::new();

    // First element is the total count
    response.push(Protocol::SimpleString(results.total.to_string()));

    // Then each document
    for doc in results.documents {
        let mut doc_array = Vec::new();

        // Add document ID if it exists
        if let Some(id) = doc.fields.get("_id") {
            doc_array.push(Protocol::BulkString(id.clone()));
        }

        // Add score
        doc_array.push(Protocol::BulkString(doc.score.to_string()));

        // Add fields as key-value pairs
        for (field_name, field_value) in doc.fields {
            if field_name != "_id" {
                doc_array.push(Protocol::BulkString(field_name));
                doc_array.push(Protocol::BulkString(field_value));
            }
        }

        response.push(Protocol::Array(doc_array));
    }

    Ok(Protocol::Array(response))
}

pub async fn ft_del_cmd(
    server: &Server,
    index_name: String,
    doc_id: String,
) -> Result<Protocol, DBError> {
    let indexes = server.search_indexes.read().unwrap();

    let _search_index = indexes.get(&index_name)
        .ok_or_else(|| DBError(format!("Index '{}' not found", index_name)))?;

    // For now, return success
    // In a full implementation, we'd need to add a delete method to TantivySearch
    println!("Deleting document '{}' from index '{}'", doc_id, index_name);

    Ok(Protocol::SimpleString("1".to_string()))
}

pub async fn ft_info_cmd(
    server: &Server,
    index_name: String,
) -> Result<Protocol, DBError> {
    let indexes = server.search_indexes.read().unwrap();

    let search_index = indexes.get(&index_name)
        .ok_or_else(|| DBError(format!("Index '{}' not found", index_name)))?;

    let info = search_index.get_info()?;

    // Format info as Redis protocol
    let mut response = Vec::new();

    response.push(Protocol::BulkString("index_name".to_string()));
    response.push(Protocol::BulkString(info.name));

    response.push(Protocol::BulkString("num_docs".to_string()));
    response.push(Protocol::BulkString(info.num_docs.to_string()));

    response.push(Protocol::BulkString("num_fields".to_string()));
    response.push(Protocol::BulkString(info.fields.len().to_string()));

    response.push(Protocol::BulkString("fields".to_string()));
    let fields_str = info.fields.iter()
        .map(|f| format!("{}:{}", f.name, f.field_type))
        .collect::<Vec<_>>()
        .join(", ");
    response.push(Protocol::BulkString(fields_str));

    Ok(Protocol::Array(response))
}

pub async fn ft_drop_cmd(
    server: &Server,
    index_name: String,
) -> Result<Protocol, DBError> {
    let mut indexes = server.search_indexes.write().unwrap();

    if indexes.remove(&index_name).is_some() {
        // Also remove the index files from disk
        let index_path = server.search_index_path().join(&index_name);
        if index_path.exists() {
            std::fs::remove_dir_all(index_path)
                .map_err(|e| DBError(format!("Failed to remove index files: {}", e)))?;
        }
        Ok(Protocol::SimpleString("OK".to_string()))
    } else {
        Err(DBError(format!("Index '{}' not found", index_name)))
    }
}
@@ -4,23 +4,21 @@ use std::sync::Arc;
use tokio::io::AsyncReadExt;
use tokio::io::AsyncWriteExt;
use tokio::sync::{Mutex, oneshot};
use std::sync::RwLock;

use std::sync::atomic::{AtomicU64, Ordering};

use crate::cmd::Cmd;
use crate::error::DBError;
use crate::lance_store::LanceStore;
use crate::options;
use crate::protocol::Protocol;
use crate::storage::Storage;
use crate::storage_sled::SledStorage;
use crate::storage_trait::StorageBackend;
use crate::tantivy_search::TantivySearch;

#[derive(Clone)]
pub struct Server {
    pub db_cache: Arc<RwLock<HashMap<u64, Arc<dyn StorageBackend>>>>,
    pub search_indexes: Arc<RwLock<HashMap<String, Arc<TantivySearch>>>>,
    pub db_cache: std::sync::Arc<std::sync::RwLock<HashMap<u64, Arc<dyn StorageBackend>>>>,
    pub option: options::DBOption,
    pub client_name: Option<String>,
    pub selected_db: u64, // Changed from usize to u64
@@ -29,6 +27,9 @@ pub struct Server {
    // BLPOP waiter registry: per (db_index, key) FIFO of waiters
    pub list_waiters: Arc<Mutex<HashMap<u64, HashMap<String, Vec<Waiter>>>>>,
    pub waiter_seq: Arc<AtomicU64>,

    // Lance vector store
    pub lance_store: Option<Arc<LanceStore>>,
}

pub struct Waiter {
@@ -45,9 +46,18 @@ pub enum PopSide {

impl Server {
    pub async fn new(option: options::DBOption) -> Self {
        // Initialize Lance store
        let lance_data_dir = std::path::PathBuf::from(&option.dir).join("lance");
        let lance_store = match LanceStore::new(lance_data_dir).await {
            Ok(store) => Some(Arc::new(store)),
            Err(e) => {
                eprintln!("Warning: Failed to initialize Lance store: {}", e.0);
                None
            }
        };

        Server {
            db_cache: Arc::new(RwLock::new(HashMap::new())),
            search_indexes: Arc::new(RwLock::new(HashMap::new())),
            db_cache: Arc::new(std::sync::RwLock::new(HashMap::new())),
            option,
            client_name: None,
            selected_db: 0,
@@ -55,9 +65,17 @@ impl Server {

            list_waiters: Arc::new(Mutex::new(HashMap::new())),
            waiter_seq: Arc::new(AtomicU64::new(1)),
            lance_store,
        }
    }

    pub fn lance_store(&self) -> Result<Arc<LanceStore>, DBError> {
        self.lance_store
            .as_ref()
            .cloned()
            .ok_or_else(|| DBError("Lance store not initialized".to_string()))
    }

    pub fn current_storage(&self) -> Result<Arc<dyn StorageBackend>, DBError> {
        let mut cache = self.db_cache.write().unwrap();

@@ -104,11 +122,6 @@ impl Server {
        // DB 0-9 are non-encrypted, DB 10+ are encrypted
        self.option.encrypt && db_index >= 10
    }

    // Add method to get search index path
    pub fn search_index_path(&self) -> std::path::PathBuf {
        std::path::PathBuf::from(&self.option.dir).join("search_indexes")
    }

    // ----- BLPOP waiter helpers -----

@@ -1,567 +0,0 @@
|
||||
use tantivy::{
|
||||
collector::TopDocs,
|
||||
directory::MmapDirectory,
|
||||
query::{QueryParser, BooleanQuery, Query, TermQuery, Occur},
|
||||
schema::{Schema, Field, TextOptions, TextFieldIndexing,
|
||||
STORED, STRING, Value},
|
||||
Index, IndexWriter, IndexReader, ReloadPolicy,
|
||||
Term, DateTime, TantivyDocument,
|
||||
tokenizer::{TokenizerManager},
|
||||
};
|
||||
use std::path::PathBuf;
|
||||
use std::sync::{Arc, RwLock};
|
||||
use std::collections::HashMap;
|
||||
use crate::error::DBError;
|
||||
use serde::{Serialize, Deserialize};
|
||||
|
||||
#[derive(Debug, Clone, Serialize, Deserialize)]
|
||||
pub enum FieldDef {
|
||||
Text {
|
||||
stored: bool,
|
||||
indexed: bool,
|
||||
tokenized: bool,
|
||||
fast: bool,
|
||||
},
|
||||
Numeric {
|
||||
stored: bool,
|
||||
indexed: bool,
|
||||
fast: bool,
|
||||
precision: NumericType,
|
||||
},
|
||||
Tag {
|
||||
stored: bool,
|
||||
separator: String,
|
||||
case_sensitive: bool,
|
||||
},
|
||||
Geo {
|
||||
stored: bool,
|
||||
},
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, Serialize, Deserialize)]
|
||||
pub enum NumericType {
|
||||
I64,
|
||||
U64,
|
||||
F64,
|
||||
Date,
|
||||
}
|
||||
|
||||
pub struct IndexSchema {
|
||||
schema: Schema,
|
||||
fields: HashMap<String, (Field, FieldDef)>,
|
||||
default_search_fields: Vec<Field>,
|
||||
}
|
||||
|
||||
pub struct TantivySearch {
|
||||
index: Index,
|
||||
writer: Arc<RwLock<IndexWriter>>,
|
||||
reader: IndexReader,
|
||||
index_schema: IndexSchema,
|
||||
name: String,
|
||||
config: IndexConfig,
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, Serialize, Deserialize)]
|
||||
pub struct IndexConfig {
|
||||
pub language: String,
|
||||
pub stopwords: Vec<String>,
|
||||
pub stemming: bool,
|
||||
pub max_doc_count: Option<usize>,
|
||||
pub default_score: f64,
|
||||
}
|
||||
|
||||
impl Default for IndexConfig {
|
||||
fn default() -> Self {
|
||||
IndexConfig {
|
||||
language: "english".to_string(),
|
||||
stopwords: vec![],
|
||||
stemming: true,
|
||||
max_doc_count: None,
|
||||
default_score: 1.0,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl TantivySearch {
    pub fn new_with_schema(
        base_path: PathBuf,
        name: String,
        field_definitions: Vec<(String, FieldDef)>,
        config: Option<IndexConfig>,
    ) -> Result<Self, DBError> {
        let index_path = base_path.join(&name);
        std::fs::create_dir_all(&index_path)
            .map_err(|e| DBError(format!("Failed to create index dir: {}", e)))?;

        // Build schema from field definitions
        let mut schema_builder = Schema::builder();
        let mut fields = HashMap::new();
        let mut default_search_fields = Vec::new();

        // Always add a document ID field
        let id_field = schema_builder.add_text_field("_id", STRING | STORED);
        fields.insert("_id".to_string(), (id_field, FieldDef::Text {
            stored: true,
            indexed: true,
            tokenized: false,
            fast: false,
        }));

        // Add user-defined fields
        for (field_name, field_def) in field_definitions {
            let field = match &field_def {
                FieldDef::Text { stored, indexed, tokenized, fast: _fast } => {
                    let mut text_options = TextOptions::default();

                    if *stored {
                        text_options = text_options.set_stored();
                    }

                    if *indexed {
                        let indexing_options = if *tokenized {
                            TextFieldIndexing::default()
                                .set_tokenizer("default")
                                .set_index_option(tantivy::schema::IndexRecordOption::WithFreqsAndPositions)
                        } else {
                            TextFieldIndexing::default()
                                .set_tokenizer("raw")
                                .set_index_option(tantivy::schema::IndexRecordOption::Basic)
                        };
                        text_options = text_options.set_indexing_options(indexing_options);

                        let f = schema_builder.add_text_field(&field_name, text_options);
                        if *tokenized {
                            default_search_fields.push(f);
                        }
                        f
                    } else {
                        schema_builder.add_text_field(&field_name, text_options)
                    }
                }
                FieldDef::Numeric { stored, indexed, fast, precision } => {
                    match precision {
                        NumericType::I64 => {
                            let mut opts = tantivy::schema::NumericOptions::default();
                            if *stored { opts = opts.set_stored(); }
                            if *indexed { opts = opts.set_indexed(); }
                            if *fast { opts = opts.set_fast(); }
                            schema_builder.add_i64_field(&field_name, opts)
                        }
                        NumericType::U64 => {
                            let mut opts = tantivy::schema::NumericOptions::default();
                            if *stored { opts = opts.set_stored(); }
                            if *indexed { opts = opts.set_indexed(); }
                            if *fast { opts = opts.set_fast(); }
                            schema_builder.add_u64_field(&field_name, opts)
                        }
                        NumericType::F64 => {
                            let mut opts = tantivy::schema::NumericOptions::default();
                            if *stored { opts = opts.set_stored(); }
                            if *indexed { opts = opts.set_indexed(); }
                            if *fast { opts = opts.set_fast(); }
                            schema_builder.add_f64_field(&field_name, opts)
                        }
                        NumericType::Date => {
                            let mut opts = tantivy::schema::DateOptions::default();
                            if *stored { opts = opts.set_stored(); }
                            if *indexed { opts = opts.set_indexed(); }
                            if *fast { opts = opts.set_fast(); }
                            schema_builder.add_date_field(&field_name, opts)
                        }
                    }
                }
                FieldDef::Tag { stored, separator: _, case_sensitive: _ } => {
                    let mut text_options = TextOptions::default();
                    if *stored {
                        text_options = text_options.set_stored();
                    }
                    text_options = text_options.set_indexing_options(
                        TextFieldIndexing::default()
                            .set_tokenizer("raw")
                            .set_index_option(tantivy::schema::IndexRecordOption::Basic)
                    );
                    schema_builder.add_text_field(&field_name, text_options)
                }
                FieldDef::Geo { stored } => {
                    // For now, store as two f64 fields for lat/lon
                    let mut opts = tantivy::schema::NumericOptions::default();
                    if *stored { opts = opts.set_stored(); }
                    opts = opts.set_indexed().set_fast();

                    let lat_field = schema_builder.add_f64_field(&format!("{}_lat", field_name), opts.clone());
                    let lon_field = schema_builder.add_f64_field(&format!("{}_lon", field_name), opts);

                    fields.insert(format!("{}_lat", field_name), (lat_field, FieldDef::Numeric {
                        stored: *stored,
                        indexed: true,
                        fast: true,
                        precision: NumericType::F64,
                    }));
                    fields.insert(format!("{}_lon", field_name), (lon_field, FieldDef::Numeric {
                        stored: *stored,
                        indexed: true,
                        fast: true,
                        precision: NumericType::F64,
                    }));
                    continue; // Skip adding the geo field itself
                }
            };

            fields.insert(field_name.clone(), (field, field_def));
        }

        let schema = schema_builder.build();
        let index_schema = IndexSchema {
            schema: schema.clone(),
            fields,
            default_search_fields,
        };

        // Create or open index
        let dir = MmapDirectory::open(&index_path)
            .map_err(|e| DBError(format!("Failed to open index directory: {}", e)))?;

        let mut index = Index::open_or_create(dir, schema)
            .map_err(|e| DBError(format!("Failed to create index: {}", e)))?;

        // Configure tokenizers
        let tokenizer_manager = TokenizerManager::default();
        index.set_tokenizers(tokenizer_manager);

        let writer = index.writer(1_000_000)
            .map_err(|e| DBError(format!("Failed to create index writer: {}", e)))?;

        let reader = index
            .reader_builder()
            .reload_policy(ReloadPolicy::OnCommitWithDelay)
            .try_into()
            .map_err(|e| DBError(format!("Failed to create reader: {}", e)))?;

        let config = config.unwrap_or_default();

        Ok(TantivySearch {
            index,
            writer: Arc::new(RwLock::new(writer)),
            reader,
            index_schema,
            name,
            config,
        })
    }
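Continuing the removed-API sketch, the constructor call itself; the path and index name are illustrative, and the snippet assumes a function returning `Result<(), DBError>` so `?` works:

```rust
use std::path::PathBuf;

// Illustrative path and index name; `field_definitions` is the vector
// from the previous sketch. `None` falls back to IndexConfig::default().
let search = TantivySearch::new_with_schema(
    PathBuf::from("/tmp/herodb/search_indexes"),
    "products".to_string(),
    field_definitions,
    None,
)?;
```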
    pub fn add_document_with_fields(
        &self,
        doc_id: &str,
        fields: HashMap<String, String>,
    ) -> Result<(), DBError> {
        let mut writer = self.writer.write()
            .map_err(|e| DBError(format!("Failed to acquire writer lock: {}", e)))?;

        // Delete existing document with same ID
        if let Some((id_field, _)) = self.index_schema.fields.get("_id") {
            writer.delete_term(Term::from_field_text(*id_field, doc_id));
        }

        // Create new document
        let mut doc = tantivy::doc!();

        // Add document ID
        if let Some((id_field, _)) = self.index_schema.fields.get("_id") {
            doc.add_text(*id_field, doc_id);
        }

        // Add other fields based on schema
        for (field_name, field_value) in fields {
            if let Some((field, field_def)) = self.index_schema.fields.get(&field_name) {
                match field_def {
                    FieldDef::Text { .. } => {
                        doc.add_text(*field, &field_value);
                    }
                    FieldDef::Numeric { precision, .. } => {
                        match precision {
                            NumericType::I64 => {
                                if let Ok(v) = field_value.parse::<i64>() {
                                    doc.add_i64(*field, v);
                                }
                            }
                            NumericType::U64 => {
                                if let Ok(v) = field_value.parse::<u64>() {
                                    doc.add_u64(*field, v);
                                }
                            }
                            NumericType::F64 => {
                                if let Ok(v) = field_value.parse::<f64>() {
                                    doc.add_f64(*field, v);
                                }
                            }
                            NumericType::Date => {
                                if let Ok(v) = field_value.parse::<i64>() {
                                    doc.add_date(*field, DateTime::from_timestamp_millis(v));
                                }
                            }
                        }
                    }
                    FieldDef::Tag { separator, case_sensitive, .. } => {
                        let tags = if !case_sensitive {
                            field_value.to_lowercase()
                        } else {
                            field_value.clone()
                        };

                        // Store tags as separate terms for efficient filtering
                        for tag in tags.split(separator.as_str()) {
                            doc.add_text(*field, tag.trim());
                        }
                    }
                    FieldDef::Geo { .. } => {
                        // Parse "lat,lon" format
                        let parts: Vec<&str> = field_value.split(',').collect();
                        if parts.len() == 2 {
                            if let (Ok(lat), Ok(lon)) = (parts[0].parse::<f64>(), parts[1].parse::<f64>()) {
                                if let Some((lat_field, _)) = self.index_schema.fields.get(&format!("{}_lat", field_name)) {
                                    doc.add_f64(*lat_field, lat);
                                }
                                if let Some((lon_field, _)) = self.index_schema.fields.get(&format!("{}_lon", field_name)) {
                                    doc.add_f64(*lon_field, lon);
                                }
                            }
                        }
                    }
                }
            }
        }

        writer.add_document(doc).map_err(|e| DBError(format!("Failed to add document: {}", e)))?;

        writer.commit()
            .map_err(|e| DBError(format!("Failed to commit: {}", e)))?;

        Ok(())
    }
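Document ingestion against that schema took plain string values, parsed per the registered `FieldDef`; a sketch with illustrative values:

```rust
use std::collections::HashMap;

// Every value arrives as a string and is parsed according to the
// FieldDef registered for that field (values are illustrative).
let mut doc = HashMap::new();
doc.insert("title".to_string(), "Mechanical keyboard".to_string());
doc.insert("price".to_string(), "79.99".to_string());         // parsed as f64
doc.insert("tags".to_string(), "input,usb,rgb".to_string());  // split on ','
doc.insert("location".to_string(), "52.37,4.89".to_string()); // "lat,lon"
search.add_document_with_fields("doc-1", doc)?;
```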
    pub fn search_with_options(
        &self,
        query_str: &str,
        options: SearchOptions,
    ) -> Result<SearchResults, DBError> {
        let searcher = self.reader.searcher();

        // Parse query based on search fields
        let query: Box<dyn Query> = if self.index_schema.default_search_fields.is_empty() {
            return Err(DBError("No searchable fields defined in schema".to_string()));
        } else {
            let query_parser = QueryParser::for_index(
                &self.index,
                self.index_schema.default_search_fields.clone(),
            );

            Box::new(query_parser.parse_query(query_str)
                .map_err(|e| DBError(format!("Failed to parse query: {}", e)))?)
        };

        // Apply filters if any
        let final_query = if !options.filters.is_empty() {
            let mut clauses: Vec<(Occur, Box<dyn Query>)> = vec![(Occur::Must, query)];

            // Add filters
            for filter in options.filters {
                if let Some((field, _)) = self.index_schema.fields.get(&filter.field) {
                    match filter.filter_type {
                        FilterType::Equals(value) => {
                            let term_query = TermQuery::new(
                                Term::from_field_text(*field, &value),
                                tantivy::schema::IndexRecordOption::Basic,
                            );
                            clauses.push((Occur::Must, Box::new(term_query)));
                        }
                        FilterType::Range { min: _, max: _ } => {
                            // Would need numeric field handling here
                            // Simplified for now
                        }
                        FilterType::InSet(values) => {
                            let mut sub_clauses: Vec<(Occur, Box<dyn Query>)> = vec![];
                            for value in values {
                                let term_query = TermQuery::new(
                                    Term::from_field_text(*field, &value),
                                    tantivy::schema::IndexRecordOption::Basic,
                                );
                                sub_clauses.push((Occur::Should, Box::new(term_query)));
                            }
                            clauses.push((Occur::Must, Box::new(BooleanQuery::new(sub_clauses))));
                        }
                    }
                }
            }

            Box::new(BooleanQuery::new(clauses))
        } else {
            query
        };

        // Execute search
        let top_docs = searcher.search(
            &*final_query,
            &TopDocs::with_limit(options.limit + options.offset)
        ).map_err(|e| DBError(format!("Search failed: {}", e)))?;

        let total_hits = top_docs.len();
        let mut documents = Vec::new();

        for (score, doc_address) in top_docs.iter().skip(options.offset).take(options.limit) {
            let retrieved_doc: TantivyDocument = searcher.doc(*doc_address)
                .map_err(|e| DBError(format!("Failed to retrieve doc: {}", e)))?;

            let mut doc_fields = HashMap::new();

            // Extract all stored fields
            for (field_name, (field, field_def)) in &self.index_schema.fields {
                match field_def {
                    FieldDef::Text { stored, .. } |
                    FieldDef::Tag { stored, .. } => {
                        if *stored {
                            if let Some(value) = retrieved_doc.get_first(*field) {
                                if let Some(text) = value.as_str() {
                                    doc_fields.insert(field_name.clone(), text.to_string());
                                }
                            }
                        }
                    }
                    FieldDef::Numeric { stored, precision, .. } => {
                        if *stored {
                            let value_str = match precision {
                                NumericType::I64 => {
                                    retrieved_doc.get_first(*field)
                                        .and_then(|v| v.as_i64())
                                        .map(|v| v.to_string())
                                }
                                NumericType::U64 => {
                                    retrieved_doc.get_first(*field)
                                        .and_then(|v| v.as_u64())
                                        .map(|v| v.to_string())
                                }
                                NumericType::F64 => {
                                    retrieved_doc.get_first(*field)
                                        .and_then(|v| v.as_f64())
                                        .map(|v| v.to_string())
                                }
                                NumericType::Date => {
                                    retrieved_doc.get_first(*field)
                                        .and_then(|v| v.as_datetime())
                                        .map(|v| v.into_timestamp_millis().to_string())
                                }
                            };

                            if let Some(v) = value_str {
                                doc_fields.insert(field_name.clone(), v);
                            }
                        }
                    }
                    FieldDef::Geo { stored } => {
                        if *stored {
                            let lat_field = self.index_schema.fields.get(&format!("{}_lat", field_name)).unwrap().0;
                            let lon_field = self.index_schema.fields.get(&format!("{}_lon", field_name)).unwrap().0;

                            let lat = retrieved_doc.get_first(lat_field).and_then(|v| v.as_f64());
                            let lon = retrieved_doc.get_first(lon_field).and_then(|v| v.as_f64());

                            if let (Some(lat), Some(lon)) = (lat, lon) {
                                doc_fields.insert(field_name.clone(), format!("{},{}", lat, lon));
                            }
                        }
                    }
                }
            }

            documents.push(SearchDocument {
                fields: doc_fields,
                score: *score,
            });
        }

        Ok(SearchResults {
            total: total_hits,
            documents,
        })
    }
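Finally, a query sketch for the removed API, combining free text over the default (tokenized) fields with a tag filter; it reuses the illustrative schema from the earlier sketches:

```rust
// Free-text query narrowed to documents whose `tags` field contains
// the exact term "usb" (tags were lowercased at index time because
// the illustrative Tag field set case_sensitive: false).
let results = search.search_with_options(
    "keyboard",
    SearchOptions {
        limit: 5,
        filters: vec![Filter {
            field: "tags".to_string(),
            filter_type: FilterType::Equals("usb".to_string()),
        }],
        ..SearchOptions::default()
    },
)?;
for doc in &results.documents {
    println!("score={} fields={:?}", doc.score, doc.fields);
}
```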
    pub fn get_info(&self) -> Result<IndexInfo, DBError> {
        let searcher = self.reader.searcher();
        let num_docs = searcher.num_docs();

        let fields_info: Vec<FieldInfo> = self.index_schema.fields.iter().map(|(name, (_, def))| {
            FieldInfo {
                name: name.clone(),
                field_type: format!("{:?}", def),
            }
        }).collect();

        Ok(IndexInfo {
            name: self.name.clone(),
            num_docs,
            fields: fields_info,
            config: self.config.clone(),
        })
    }
}

#[derive(Debug, Clone)]
pub struct SearchOptions {
    pub limit: usize,
    pub offset: usize,
    pub filters: Vec<Filter>,
    pub sort_by: Option<String>,
    pub return_fields: Option<Vec<String>>,
    pub highlight: bool,
}

impl Default for SearchOptions {
    fn default() -> Self {
        SearchOptions {
            limit: 10,
            offset: 0,
            filters: vec![],
            sort_by: None,
            return_fields: None,
            highlight: false,
        }
    }
}

#[derive(Debug, Clone)]
pub struct Filter {
    pub field: String,
    pub filter_type: FilterType,
}

#[derive(Debug, Clone)]
pub enum FilterType {
    Equals(String),
    Range { min: String, max: String },
    InSet(Vec<String>),
}

#[derive(Debug)]
pub struct SearchResults {
    pub total: usize,
    pub documents: Vec<SearchDocument>,
}

#[derive(Debug)]
pub struct SearchDocument {
    pub fields: HashMap<String, String>,
    pub score: f32,
}

#[derive(Debug, Serialize, Deserialize)]
pub struct IndexInfo {
    pub name: String,
    pub num_docs: u64,
    pub fields: Vec<FieldInfo>,
    pub config: IndexConfig,
}

#[derive(Debug, Serialize, Deserialize)]
pub struct FieldInfo {
    pub name: String,
    pub field_type: String,
}