add data packages and remove empty submodule

Timur Gordon
2025-08-07 12:13:37 +02:00
parent ca736d62f3
commit d7562ce466
47 changed files with 8639 additions and 1 deletion

packages/data/ourdb/API.md Normal file
View File

@@ -0,0 +1,277 @@
# OurDB API Reference
This document provides a comprehensive reference for the OurDB Rust API.
## Table of Contents
1. [Configuration](#configuration)
2. [Database Operations](#database-operations)
- [Creating and Opening](#creating-and-opening)
- [Setting Data](#setting-data)
- [Getting Data](#getting-data)
- [Deleting Data](#deleting-data)
- [History Tracking](#history-tracking)
3. [Error Handling](#error-handling)
4. [Advanced Usage](#advanced-usage)
- [Custom File Size](#custom-file-size)
- [Custom Key Size](#custom-key-size)
5. [Performance Considerations](#performance-considerations)
## Configuration
### OurDBConfig
The `OurDBConfig` struct is used to configure a new OurDB instance.
```rust
pub struct OurDBConfig {
    pub path: PathBuf,
    pub incremental_mode: bool,
    pub file_size: Option<u32>,
    pub keysize: Option<u8>,
    pub reset: Option<bool>,
}
```
| Field | Type | Description |
|-------|------|-------------|
| `path` | `PathBuf` | Path to the database directory |
| `incremental_mode` | `bool` | Whether to use auto-incremented IDs (true) or user-provided IDs (false) |
| `file_size` | `Option<u32>` | Maximum size of each database file in bytes (default: 500MB) |
| `keysize` | `Option<u8>` | Size of keys in bytes (default: 4; valid values: 2, 3, 4, 6) |
| `reset` | `Option<bool>` | Whether to delete an existing database at the path before opening (default: false) |
Example:
```rust
let config = OurDBConfig {
    path: PathBuf::from("/path/to/db"),
    incremental_mode: true,
    file_size: Some(1024 * 1024 * 100), // 100MB
    keysize: Some(4),                   // 4-byte keys
    reset: None,                        // keep an existing database
};
```
## Database Operations
### Creating and Opening
#### `OurDB::new`
Creates a new OurDB instance or opens an existing one.
```rust
pub fn new(config: OurDBConfig) -> Result<OurDB, Error>
```
Example:
```rust
let mut db = OurDB::new(config)?;
```
### Setting Data
#### `OurDB::set`
Sets a value in the database. In incremental mode, omitting the ID inserts a new record with an auto-generated ID, while supplying an ID updates the existing record with that ID. In key-value mode, an ID must always be provided.
```rust
pub fn set(&mut self, args: OurDBSetArgs) -> Result<u32, Error>
```
The `OurDBSetArgs` struct has the following fields:
```rust
pub struct OurDBSetArgs<'a> {
pub id: Option<u32>,
pub data: &'a [u8],
}
```
Example with auto-generated ID:
```rust
let id = db.set(OurDBSetArgs {
id: None,
data: b"Hello, World!",
})?;
```
Example with explicit ID:
```rust
db.set(OurDBSetArgs {
id: Some(42),
data: b"Hello, World!",
})?;
```
### Getting Data
#### `OurDB::get`
Retrieves a value from the database by ID.
```rust
pub fn get(&mut self, id: u32) -> Result<Vec<u8>, Error>
```
Example:
```rust
let data = db.get(42)?;
```
### Deleting Data
#### `OurDB::delete`
Deletes a value from the database by ID.
```rust
pub fn delete(&mut self, id: u32) -> Result<(), Error>
```
Example:
```rust
db.delete(42)?;
```
### History Tracking
#### `OurDB::get_history`
Retrieves the history of values for a given ID, up to the specified depth.
```rust
pub fn get_history(&mut self, id: u32, depth: u8) -> Result<Vec<Vec<u8>>, Error>
```
Example:
```rust
// Get the last 5 versions of the record
let history = db.get_history(42, 5)?;
// Process each version (most recent first)
for (i, version) in history.iter().enumerate() {
println!("Version {}: {:?}", i, version);
}
```
### Other Operations
#### `OurDB::get_next_id`
Returns the next ID that will be assigned in incremental mode.
```rust
pub fn get_next_id(&mut self) -> Result<u32, Error>
```
Example:
```rust
let next_id = db.get_next_id()?;
```
#### `OurDB::close`
Closes the database, ensuring all data is flushed to disk.
```rust
pub fn close(&mut self) -> Result<(), Error>
```
Example:
```rust
db.close()?;
```
#### `OurDB::destroy`
Closes the database and deletes all database files.
```rust
pub fn destroy(&mut self) -> Result<(), Error>
```
Example:
```rust
db.destroy()?;
```
## Error Handling
OurDB uses the `thiserror` crate to define error types. The main error type is `ourdb::Error`.
```rust
pub enum Error {
    /// IO errors from file operations
    Io(std::io::Error),
    /// Data corruption (e.g. CRC mismatch)
    DataCorruption(String),
    /// Invalid operation (bad arguments, size limits, wrong mode)
    InvalidOperation(String),
    /// Lookup table errors
    LookupError(String),
    /// Record not found
    NotFound(String),
    /// Other errors
    Other(String),
}
```
All OurDB operations that can fail return a `Result<T, Error>` which can be handled using Rust's standard error handling mechanisms.
Example:
```rust
match db.get(42) {
    Ok(data) => println!("Found data: {:?}", data),
    Err(ourdb::Error::NotFound(msg)) => println!("Record not found: {}", msg),
    Err(e) => eprintln!("Error: {}", e),
}
```
## Advanced Usage
### Custom File Size
You can configure the maximum size of each database file:
```rust
let config = OurDBConfig {
    path: PathBuf::from("/path/to/db"),
    incremental_mode: true,
    file_size: Some(1024 * 1024 * 10), // 10MB per file
    keysize: None,
    reset: None,
};
```
Smaller file sizes can be useful for:
- Limiting memory usage when reading files
- Improving performance on systems with limited memory
- Easier backup and file management
### Custom Key Size
OurDB supports different key sizes (2, 3, 4, or 6 bytes):
```rust
let config = OurDBConfig {
    path: PathBuf::from("/path/to/db"),
    incremental_mode: true,
    file_size: None,
    keysize: Some(6), // 6-byte keys
    reset: None,
};
```
Key size considerations:
- 2 bytes: Up to 65,536 records
- 3 bytes: Up to 16,777,216 records
- 4 bytes: Up to 4,294,967,296 records (default)
- 6 bytes: Up to 281,474,976,710,656 records
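Each limit is simply 2^(8 * keysize); a quick sanity check (illustrative only):
```rust
// Addressable record count is 2^(8 * keysize).
assert_eq!(1u64 << (8 * 2), 65_536);
assert_eq!(1u64 << (8 * 3), 16_777_216);
assert_eq!(1u64 << (8 * 4), 4_294_967_296);
assert_eq!(1u64 << (8 * 6), 281_474_976_710_656);
```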
## Performance Considerations
For optimal performance:
1. **Choose appropriate key size**: Use the smallest key size that can accommodate your expected number of records.
2. **Configure file size**: For large databases, consider using smaller file sizes to improve memory usage.
3. **Batch operations**: When inserting or updating many records, consider batching operations to minimize disk I/O.
4. **Close properly**: Always call `close()` when you're done with the database to ensure data is properly flushed to disk.
5. **Reuse OurDB instance**: Creating a new OurDB instance has overhead, so reuse the same instance for multiple operations when possible.
6. **Consider memory usage**: The lookup table is loaded into memory, so very large databases may require significant RAM.
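A minimal sketch of points 3 and 5 (one long-lived instance, a single flush at the end), using the same configuration fields shown earlier in this document:
```rust
use ourdb::{OurDB, OurDBConfig, OurDBSetArgs};
use std::path::PathBuf;

fn bulk_insert(records: &[&[u8]]) -> Result<Vec<u32>, ourdb::Error> {
    let config = OurDBConfig {
        path: PathBuf::from("/tmp/ourdb_bulk"), // example path
        incremental_mode: true,
        file_size: None,
        keysize: None,
        reset: None,
    };
    // Reuse one instance for the whole batch instead of reopening per record.
    let mut db = OurDB::new(config)?;
    let mut ids = Vec::with_capacity(records.len());
    for &data in records {
        ids.push(db.set(OurDBSetArgs { id: None, data })?);
    }
    // Close once so everything is flushed to disk.
    db.close()?;
    Ok(ids)
}
```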

View File

@@ -0,0 +1,32 @@
[package]
name = "ourdb"
version = "0.1.0"
edition = "2021"
description = "A lightweight, efficient key-value database with history tracking capabilities"
authors = ["OurWorld Team"]
[dependencies]
crc32fast = "1.3.2"
thiserror = "1.0.40"
log = "0.4.17"
rand = "0.8.5"
[dev-dependencies]
criterion = "0.5.1"
tempfile = "3.8.0"
# [[bench]]
# name = "ourdb_benchmarks"
# harness = false
[[example]]
name = "basic_usage"
path = "examples/basic_usage.rs"
[[example]]
name = "advanced_usage"
path = "examples/advanced_usage.rs"
[[example]]
name = "benchmark"
path = "examples/benchmark.rs"

View File

@@ -0,0 +1,135 @@
# OurDB
OurDB is a lightweight, efficient key-value database implementation that provides data persistence with history tracking capabilities. This Rust implementation offers a robust and performant solution for applications requiring simple but reliable data storage.
## Features
- Simple key-value storage with history tracking
- Data integrity verification using CRC32
- Support for multiple backend files for large datasets
- Lookup table for fast data retrieval
- Incremental mode for auto-generated IDs
- Memory and disk-based lookup tables
## Limitations
- Maximum data size per entry is 65,535 bytes (~64KB) due to the 2-byte size field in the record header
## Usage
### Basic Example
```rust
use ourdb::{OurDB, OurDBConfig, OurDBSetArgs};
use std::path::PathBuf;
fn main() -> Result<(), ourdb::Error> {
// Create a new database
    let config = OurDBConfig {
        path: PathBuf::from("/tmp/ourdb"),
        incremental_mode: true,
        file_size: None, // Use default (500MB)
        keysize: None,   // Use default (4 bytes)
        reset: None,     // Keep an existing database if present
    };
let mut db = OurDB::new(config)?;
// Store data (with auto-generated ID in incremental mode)
let data = b"Hello, OurDB!";
let id = db.set(OurDBSetArgs { id: None, data })?;
println!("Stored data with ID: {}", id);
// Retrieve data
let retrieved = db.get(id)?;
println!("Retrieved: {}", String::from_utf8_lossy(&retrieved));
// Update data
let updated_data = b"Updated data";
db.set(OurDBSetArgs { id: Some(id), data: updated_data })?;
// Get history (returns most recent first)
let history = db.get_history(id, 2)?;
for (i, entry) in history.iter().enumerate() {
println!("History {}: {}", i, String::from_utf8_lossy(entry));
}
// Delete data
db.delete(id)?;
// Close the database
db.close()?;
Ok(())
}
```
### Key-Value Mode vs Incremental Mode
OurDB supports two operating modes:
1. **Key-Value Mode** (`incremental_mode: false`): You must provide IDs explicitly when storing data.
2. **Incremental Mode** (`incremental_mode: true`): IDs are auto-generated when not provided.
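A minimal sketch of the difference, assuming `kv_db` was opened with `incremental_mode: false` and `inc_db` with `incremental_mode: true`:
```rust
use ourdb::{OurDB, OurDBSetArgs};

fn modes(kv_db: &mut OurDB, inc_db: &mut OurDB) -> Result<(), ourdb::Error> {
    // Key-Value Mode: the caller chooses the ID.
    kv_db.set(OurDBSetArgs { id: Some(42), data: b"explicit ID" })?;

    // Incremental Mode: omit the ID and OurDB assigns the next one.
    let id = inc_db.set(OurDBSetArgs { id: None, data: b"auto ID" })?;
    println!("assigned ID: {}", id);
    Ok(())
}
```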
### Configuration Options
- `path`: Directory for database storage
- `incremental_mode`: Whether to use auto-increment mode
- `file_size`: Maximum file size (default: 500MB)
- `keysize`: Size of lookup table entries (2, 3, 4, or 6 bytes)
- 2: For databases with < 65,536 records
- 3: For databases with < 16,777,216 records
- 4: For databases with < 4,294,967,296 records (default)
- 6: For large databases requiring multiple files
## Architecture
OurDB consists of three main components:
1. **Frontend API**: Provides the public interface for database operations
2. **Lookup Table**: Maps keys to physical locations in the backend storage
3. **Backend Storage**: Manages the actual data persistence in files
### Record Format
Each record in the backend storage includes:
- 2 bytes: Data size
- 4 bytes: CRC32 checksum
- 6 bytes: Previous record location (for history)
- N bytes: Actual data
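A sketch of that layout as encoding code (it mirrors the backend in this crate: size and CRC are written little-endian, and the previous-location bytes are the 6-byte form produced by `Location::to_bytes`):
```rust
use crc32fast::Hasher;

// Illustrative only: build the on-disk bytes for one record.
fn encode_record(data: &[u8], prev_location: [u8; 6]) -> Vec<u8> {
    assert!(data.len() <= u16::MAX as usize); // ~64KB limit per entry

    let mut hasher = Hasher::new();
    hasher.update(data);
    let crc = hasher.finalize();

    let mut record = Vec::with_capacity(12 + data.len());
    record.extend_from_slice(&(data.len() as u16).to_le_bytes()); // 2 bytes: size
    record.extend_from_slice(&crc.to_le_bytes());                 // 4 bytes: CRC32
    record.extend_from_slice(&prev_location);                     // 6 bytes: previous location
    record.extend_from_slice(data);                               // N bytes: data
    record
}
```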
## Documentation
Additional documentation is available in the repository:
- [API Reference](API.md): Detailed API documentation
- [Migration Guide](MIGRATION.md): Guide for migrating from the V implementation
- [Architecture](architecture.md): Design and implementation details
## Examples
The repository includes several examples to demonstrate OurDB usage:
- `basic_usage.rs`: Simple operations with OurDB
- `advanced_usage.rs`: More complex features including both operation modes
- `benchmark.rs`: Performance benchmarking tool
Run an example with:
```bash
cargo run --example basic_usage
cargo run --example advanced_usage
cargo run --example benchmark
```
## Performance
OurDB is designed for efficiency and minimal overhead. The benchmark example can be used to evaluate performance on your specific hardware and workload.
Typical performance metrics on modern hardware:
- **Write**: 10,000+ operations per second
- **Read**: 50,000+ operations per second
## License
This project is licensed under the MIT License.

View File

@@ -0,0 +1,439 @@
# OurDB: Architecture for V to Rust Port
## 1. Overview
OurDB is a lightweight, efficient key-value database implementation that provides data persistence with history tracking capabilities. This document outlines the architecture for porting OurDB from its original V implementation to Rust, maintaining all existing functionality while leveraging Rust's memory safety, performance, and ecosystem.
## 2. Current Architecture (V Implementation)
The current V implementation of OurDB consists of three main components in a layered architecture:
```mermaid
graph TD
A[Client Code] --> B[Frontend API]
B --> C[Lookup Table]
B --> D[Backend Storage]
C --> D
```
### 2.1 Frontend (db.v)
The frontend provides the public API for database operations and coordinates between the lookup table and backend storage components.
Key responsibilities:
- Exposing high-level operations (set, get, delete, history)
- Managing incremental ID generation in auto-increment mode
- Coordinating data flow between lookup and backend components
- Handling database lifecycle (open, close, destroy)
### 2.2 Lookup Table (lookup.v)
The lookup table maps keys to physical locations in the backend storage.
Key responsibilities:
- Maintaining key-to-location mapping
- Optimizing key sizes based on database configuration
- Supporting both memory and disk-based lookup tables
- Handling sparse data efficiently
- Providing next ID generation for incremental mode
### 2.3 Backend Storage (backend.v)
The backend storage manages the actual data persistence in files.
Key responsibilities:
- Managing physical data storage in files
- Ensuring data integrity with CRC32 checksums
- Supporting multiple file backends for large datasets
- Implementing low-level read/write operations
- Tracking record history through linked locations
### 2.4 Core Data Structures
#### OurDB
```v
@[heap]
pub struct OurDB {
mut:
lookup &LookupTable
pub:
path string // directory for storage
incremental_mode bool
file_size u32 = 500 * (1 << 20) // 500MB
pub mut:
file os.File
file_nr u16 // the file which is open
last_used_file_nr u16
}
```
#### LookupTable
```v
pub struct LookupTable {
keysize u8
lookuppath string
mut:
data []u8
incremental ?u32 // points to next empty slot if incremental mode is enabled
}
```
#### Location
```v
pub struct Location {
pub mut:
file_nr u16
position u32
}
```
### 2.5 Storage Format
#### Record Format
Each record in the backend storage includes:
- 2 bytes: Data size
- 4 bytes: CRC32 checksum
- 6 bytes: Previous record location (for history)
- N bytes: Actual data
#### Lookup Table Optimization
The lookup table automatically optimizes its key size based on the database configuration:
- 2 bytes: For databases with < 65,536 records
- 3 bytes: For databases with < 16,777,216 records
- 4 bytes: For databases with < 4,294,967,296 records
- 6 bytes: For large databases requiring multiple files
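Rendered as a Rust sketch of the selection rule above (illustrative, not the V code):
```rust
// Pick the smallest key size that can address the expected record count.
fn choose_keysize(expected_records: u64, multiple_files: bool) -> u8 {
    if multiple_files {
        6 // 2-byte file number + 4-byte position
    } else if expected_records < (1 << 16) {
        2
    } else if expected_records < (1 << 24) {
        3
    } else {
        4
    }
}
```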
## 3. Proposed Rust Architecture
The Rust implementation will maintain the same layered architecture while leveraging Rust's type system, ownership model, and error handling.
```mermaid
graph TD
A[Client Code] --> B[OurDB API]
B --> C[LookupTable]
B --> D[Backend]
C --> D
E[Error Handling] --> B
E --> C
E --> D
F[Configuration] --> B
```
### 3.1 Core Components
#### 3.1.1 OurDB (API Layer)
```rust
pub struct OurDB {
path: String,
incremental_mode: bool,
file_size: u32,
lookup: LookupTable,
file: Option<std::fs::File>,
file_nr: u16,
last_used_file_nr: u16,
}
impl OurDB {
pub fn new(config: OurDBConfig) -> Result<Self, Error>;
pub fn set(&mut self, id: Option<u32>, data: &[u8]) -> Result<u32, Error>;
pub fn get(&mut self, id: u32) -> Result<Vec<u8>, Error>;
pub fn get_history(&mut self, id: u32, depth: u8) -> Result<Vec<Vec<u8>>, Error>;
pub fn delete(&mut self, id: u32) -> Result<(), Error>;
pub fn get_next_id(&mut self) -> Result<u32, Error>;
pub fn close(&mut self) -> Result<(), Error>;
pub fn destroy(&mut self) -> Result<(), Error>;
}
```
#### 3.1.2 LookupTable
```rust
pub struct LookupTable {
keysize: u8,
lookuppath: String,
data: Vec<u8>,
incremental: Option<u32>,
}
impl LookupTable {
fn new(config: LookupConfig) -> Result<Self, Error>;
fn get(&self, id: u32) -> Result<Location, Error>;
fn set(&mut self, id: u32, location: Location) -> Result<(), Error>;
fn delete(&mut self, id: u32) -> Result<(), Error>;
fn get_next_id(&self) -> Result<u32, Error>;
fn increment_index(&mut self) -> Result<(), Error>;
fn export_data(&self, path: &str) -> Result<(), Error>;
fn import_data(&mut self, path: &str) -> Result<(), Error>;
fn export_sparse(&self, path: &str) -> Result<(), Error>;
fn import_sparse(&mut self, path: &str) -> Result<(), Error>;
}
```
#### 3.1.3 Location
```rust
pub struct Location {
file_nr: u16,
position: u32,
}
impl Location {
fn new(bytes: &[u8], keysize: u8) -> Result<Self, Error>;
fn to_bytes(&self) -> Result<Vec<u8>, Error>;
fn to_u64(&self) -> u64;
}
```
#### 3.1.4 Backend
The backend functionality will be implemented as methods on the OurDB struct:
```rust
impl OurDB {
fn db_file_select(&mut self, file_nr: u16) -> Result<(), Error>;
fn create_new_db_file(&mut self, file_nr: u16) -> Result<(), Error>;
fn get_file_nr(&mut self) -> Result<u16, Error>;
fn set_(&mut self, id: u32, old_location: Location, data: &[u8]) -> Result<(), Error>;
fn get_(&mut self, location: Location) -> Result<Vec<u8>, Error>;
fn get_prev_pos_(&mut self, location: Location) -> Result<Location, Error>;
fn delete_(&mut self, id: u32, location: Location) -> Result<(), Error>;
fn close_(&mut self);
}
```
#### 3.1.5 Configuration
```rust
pub struct OurDBConfig {
pub record_nr_max: u32,
pub record_size_max: u32,
pub file_size: u32,
pub path: String,
pub incremental_mode: bool,
pub reset: bool,
}
struct LookupConfig {
size: u32,
keysize: u8,
lookuppath: String,
incremental_mode: bool,
}
```
#### 3.1.6 Error Handling
```rust
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("I/O error: {0}")]
Io(#[from] std::io::Error),
#[error("Invalid key size: {0}")]
InvalidKeySize(u8),
#[error("Record not found: {0}")]
RecordNotFound(u32),
#[error("Data corruption: CRC mismatch")]
DataCorruption,
#[error("Index out of bounds: {0}")]
IndexOutOfBounds(u32),
#[error("Incremental mode not enabled")]
IncrementalNotEnabled,
#[error("Lookup table is full")]
LookupTableFull,
#[error("Invalid file number: {0}")]
InvalidFileNumber(u16),
#[error("Invalid operation: {0}")]
InvalidOperation(String),
}
```
## 4. Implementation Strategy
### 4.1 Phase 1: Core Data Structures
1. Implement the `Location` struct with serialization/deserialization
2. Implement the `Error` enum for error handling
3. Implement the configuration structures
### 4.2 Phase 2: Lookup Table
1. Implement the `LookupTable` struct with memory-based storage
2. Add disk-based storage support
3. Implement key size optimization
4. Add incremental ID support
5. Implement import/export functionality
### 4.3 Phase 3: Backend Storage
1. Implement file management functions
2. Implement record serialization/deserialization with CRC32
3. Implement history tracking through linked locations
4. Add support for multiple backend files
### 4.4 Phase 4: Frontend API
1. Implement the `OurDB` struct with core operations
2. Add high-level API methods (set, get, delete, history)
3. Implement database lifecycle management
### 4.5 Phase 5: Testing and Optimization
1. Port existing tests from V to Rust
2. Add new tests for Rust-specific functionality
3. Benchmark and optimize performance
4. Ensure compatibility with existing OurDB files
## 5. Implementation Considerations
### 5.1 Memory Management
Leverage Rust's ownership model for safe and efficient memory management:
- Use `Vec<u8>` for data buffers instead of raw pointers
- Implement proper RAII for file handles
- Use references and borrows to avoid unnecessary copying
- Consider using `Bytes` from the `bytes` crate for zero-copy operations
### 5.2 Error Handling
Use Rust's `Result` type for comprehensive error handling:
- Define custom error types for OurDB-specific errors
- Propagate errors using the `?` operator
- Provide detailed error messages
- Implement proper error conversion using the `From` trait
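A minimal sketch of what this looks like with the proposed `Error` enum from section 3.1.6 (the `#[from]` attribute lets `?` convert `std::io::Error` automatically):
```rust
use std::fs;

// Sketch only: read a file, letting io::Error convert into Error::Io.
fn read_lookup_dump(path: &str) -> Result<Vec<u8>, Error> {
    let bytes = fs::read(path)?; // std::io::Error -> Error via #[from]
    Ok(bytes)
}
```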
### 5.3 File I/O
Optimize file operations for performance:
- Use `BufReader` and `BufWriter` for buffered I/O
- Implement proper file locking for concurrent access
- Consider memory-mapped files for lookup tables
- Use `seek` and `read_exact` for precise positioning
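For example, a buffered positioned read might look like this (a sketch, not the final backend code):
```rust
use std::fs::File;
use std::io::{BufReader, Read, Seek, SeekFrom};

// Sketch: seek to an exact offset, then read exactly `len` bytes.
fn read_at(file: &File, position: u64, len: usize) -> std::io::Result<Vec<u8>> {
    let mut reader = BufReader::new(file); // &File implements Read + Seek
    reader.seek(SeekFrom::Start(position))?;
    let mut buf = vec![0u8; len];
    reader.read_exact(&mut buf)?;
    Ok(buf)
}
```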
### 5.4 Concurrency
Consider thread safety for concurrent database access:
- Use interior mutability patterns where appropriate
- Implement `Send` and `Sync` traits for thread safety
- Consider using `RwLock` for shared read access
- Provide clear documentation on thread safety guarantees
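One simple pattern that satisfies these constraints (a sketch, assuming the final `OurDB` ends up `Send`, which its planned fields allow) is to serialize access behind a mutex, using the `set` signature proposed in section 3.1.1:
```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Sketch: share one OurDB across threads by locking around each operation.
fn spawn_writer(db: Arc<Mutex<OurDB>>) -> thread::JoinHandle<Result<u32, Error>> {
    thread::spawn(move || {
        let mut db = db.lock().unwrap(); // exclusive access for &mut self methods
        db.set(Some(1), b"written from a worker thread")
    })
}
```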
### 5.5 Performance Optimizations
Identify opportunities for performance improvements:
- Use memory-mapped files for lookup tables
- Implement caching for frequently accessed records
- Use zero-copy operations where possible
- Consider async I/O for non-blocking operations
## 6. Testing Strategy
### 6.1 Unit Tests
Write comprehensive unit tests for each component:
- Test `Location` serialization/deserialization
- Test `LookupTable` operations
- Test backend storage functions
- Test error handling
### 6.2 Integration Tests
Write integration tests for the complete system:
- Test database creation and configuration
- Test basic CRUD operations
- Test history tracking
- Test incremental ID generation
- Test file management
### 6.3 Compatibility Tests
Ensure compatibility with existing OurDB files:
- Test reading existing V-created OurDB files
- Test writing files that can be read by the V implementation
- Test migration scenarios
### 6.4 Performance Tests
Benchmark performance against the V implementation:
- Measure throughput for set/get operations
- Measure latency for different operations
- Test with different database sizes
- Test with different record sizes
## 7. Project Structure
```
ourdb/
├── Cargo.toml
├── src/
│ ├── lib.rs # Public API and re-exports
│ ├── ourdb.rs # OurDB implementation (frontend)
│ ├── lookup.rs # Lookup table implementation
│ ├── location.rs # Location struct implementation
│ ├── backend.rs # Backend storage implementation
│ ├── error.rs # Error types
│ ├── config.rs # Configuration structures
│ └── utils.rs # Utility functions
├── tests/
│ ├── unit/ # Unit tests
│ ├── integration/ # Integration tests
│ └── compatibility/ # Compatibility tests
└── examples/
├── basic.rs # Basic usage example
├── history.rs # History tracking example
└── client_server.rs # Client-server example
```
## 8. Dependencies
The Rust implementation will use the following dependencies:
- `thiserror` for error handling
- `crc32fast` for CRC32 calculation
- `bytes` for efficient byte manipulation
- `memmap2` for memory-mapped files (optional)
- `serde` for serialization (optional, for future extensions)
- `log` for logging
- `criterion` for benchmarking
## 9. Compatibility Considerations
To ensure compatibility with the V implementation:
1. Maintain the same file format for data storage
2. Preserve the lookup table format
3. Keep the same CRC32 calculation method
4. Ensure identical behavior for incremental ID generation
5. Maintain the same history tracking mechanism
## 10. Future Extensions
Potential future extensions to consider:
1. Async API for non-blocking operations
2. Transactions support
3. Better concurrency control
4. Compression support
5. Encryption support
6. Streaming API for large values
7. Iterators for scanning records
8. Secondary indexes
## 11. Conclusion
This architecture provides a roadmap for porting OurDB from V to Rust while maintaining compatibility and leveraging Rust's strengths. The implementation will follow a phased approach, starting with core data structures and gradually building up to the complete system.
The Rust implementation aims to be:
- **Safe**: Leveraging Rust's ownership model for memory safety
- **Fast**: Maintaining or improving performance compared to V
- **Compatible**: Working with existing OurDB files
- **Extensible**: Providing a foundation for future enhancements
- **Well-tested**: Including comprehensive test coverage

View File

@@ -0,0 +1,231 @@
use ourdb::{OurDB, OurDBConfig, OurDBSetArgs};
use std::path::PathBuf;
use std::time::Instant;
fn main() -> Result<(), ourdb::Error> {
// Create a temporary directory for the database
let db_path = std::env::temp_dir().join("ourdb_advanced_example");
std::fs::create_dir_all(&db_path)?;
println!("Creating database at: {}", db_path.display());
// Demonstrate key-value mode (non-incremental)
key_value_mode_example(&db_path)?;
// Demonstrate incremental mode
incremental_mode_example(&db_path)?;
// Demonstrate performance benchmarking
performance_benchmark(&db_path)?;
// Clean up (optional)
if std::env::var("KEEP_DB").is_err() {
std::fs::remove_dir_all(&db_path)?;
println!("Cleaned up database directory");
} else {
println!("Database kept at: {}", db_path.display());
}
Ok(())
}
fn key_value_mode_example(base_path: &PathBuf) -> Result<(), ourdb::Error> {
println!("\n=== Key-Value Mode Example ===");
let db_path = base_path.join("key_value");
std::fs::create_dir_all(&db_path)?;
// Create a new database with key-value mode (non-incremental)
let config = OurDBConfig {
path: db_path,
incremental_mode: false,
file_size: Some(1024 * 1024), // 1MB for testing
keysize: Some(2), // Small key size for demonstration
reset: None, // Don't reset existing database
};
let mut db = OurDB::new(config)?;
// In key-value mode, we must provide IDs explicitly
let custom_ids = [100, 200, 300, 400, 500];
// Store data with custom IDs
for (i, &id) in custom_ids.iter().enumerate() {
let data = format!("Record with custom ID {}", id);
db.set(OurDBSetArgs {
id: Some(id),
data: data.as_bytes(),
})?;
println!("Stored record {} with custom ID: {}", i + 1, id);
}
// Retrieve data by custom IDs
for &id in &custom_ids {
let retrieved = db.get(id)?;
println!(
"Retrieved ID {}: {}",
id,
String::from_utf8_lossy(&retrieved)
);
}
// Update and track history
let id_to_update = custom_ids[2]; // ID 300
for i in 1..=3 {
let updated_data = format!("Updated record {} (version {})", id_to_update, i);
db.set(OurDBSetArgs {
id: Some(id_to_update),
data: updated_data.as_bytes(),
})?;
println!("Updated ID {} (version {})", id_to_update, i);
}
// Get history for the updated record
let history = db.get_history(id_to_update, 5)?;
println!("History for ID {} (most recent first):", id_to_update);
for (i, entry) in history.iter().enumerate() {
println!(" Version {}: {}", i, String::from_utf8_lossy(entry));
}
db.close()?;
println!("Key-value mode example completed");
Ok(())
}
fn incremental_mode_example(base_path: &PathBuf) -> Result<(), ourdb::Error> {
println!("\n=== Incremental Mode Example ===");
let db_path = base_path.join("incremental");
std::fs::create_dir_all(&db_path)?;
// Create a new database with incremental mode
let config = OurDBConfig {
path: db_path,
incremental_mode: true,
file_size: Some(1024 * 1024), // 1MB for testing
keysize: Some(3), // 3-byte keys
reset: None, // Don't reset existing database
};
let mut db = OurDB::new(config)?;
// In incremental mode, IDs are auto-generated
let mut assigned_ids = Vec::new();
// Store multiple records and collect assigned IDs
for i in 1..=5 {
let data = format!("Auto-increment record {}", i);
let id = db.set(OurDBSetArgs {
id: None,
data: data.as_bytes(),
})?;
assigned_ids.push(id);
println!("Stored record {} with auto-assigned ID: {}", i, id);
}
// Check next ID
let next_id = db.get_next_id()?;
println!("Next ID to be assigned: {}", next_id);
// Retrieve all records
for &id in &assigned_ids {
let retrieved = db.get(id)?;
println!(
"Retrieved ID {}: {}",
id,
String::from_utf8_lossy(&retrieved)
);
}
db.close()?;
println!("Incremental mode example completed");
Ok(())
}
fn performance_benchmark(base_path: &PathBuf) -> Result<(), ourdb::Error> {
println!("\n=== Performance Benchmark ===");
let db_path = base_path.join("benchmark");
std::fs::create_dir_all(&db_path)?;
// Create a new database
let config = OurDBConfig {
path: db_path,
incremental_mode: true,
file_size: Some(1024 * 1024), // 1MB for testing
keysize: Some(4), // 4-byte keys
reset: None, // Don't reset existing database
};
let mut db = OurDB::new(config)?;
// Number of operations for the benchmark
let num_operations = 1000;
let data_size = 100; // bytes per record
// Prepare test data
let test_data = vec![b'A'; data_size];
// Benchmark write operations
println!("Benchmarking {} write operations...", num_operations);
let start = Instant::now();
let mut ids = Vec::with_capacity(num_operations);
for _ in 0..num_operations {
let id = db.set(OurDBSetArgs {
id: None,
data: &test_data,
})?;
ids.push(id);
}
let write_duration = start.elapsed();
let writes_per_second = num_operations as f64 / write_duration.as_secs_f64();
println!(
"Write performance: {:.2} ops/sec ({:.2} ms/op)",
writes_per_second,
write_duration.as_secs_f64() * 1000.0 / num_operations as f64
);
// Benchmark read operations
println!("Benchmarking {} read operations...", num_operations);
let start = Instant::now();
for &id in &ids {
let _ = db.get(id)?;
}
let read_duration = start.elapsed();
let reads_per_second = num_operations as f64 / read_duration.as_secs_f64();
println!(
"Read performance: {:.2} ops/sec ({:.2} ms/op)",
reads_per_second,
read_duration.as_secs_f64() * 1000.0 / num_operations as f64
);
// Benchmark update operations
println!("Benchmarking {} update operations...", num_operations);
let start = Instant::now();
for &id in &ids {
db.set(OurDBSetArgs {
id: Some(id),
data: &test_data,
})?;
}
let update_duration = start.elapsed();
let updates_per_second = num_operations as f64 / update_duration.as_secs_f64();
println!(
"Update performance: {:.2} ops/sec ({:.2} ms/op)",
updates_per_second,
update_duration.as_secs_f64() * 1000.0 / num_operations as f64
);
db.close()?;
println!("Performance benchmark completed");
Ok(())
}

View File

@@ -0,0 +1,89 @@
use ourdb::{OurDB, OurDBConfig, OurDBSetArgs};
fn main() -> Result<(), ourdb::Error> {
// Create a temporary directory for the database
let db_path = std::env::temp_dir().join("ourdb_example");
std::fs::create_dir_all(&db_path)?;
println!("Creating database at: {}", db_path.display());
// Create a new database with incremental mode enabled
let config = OurDBConfig {
path: db_path.clone(),
incremental_mode: true,
file_size: None, // Use default (500MB)
keysize: None, // Use default (4 bytes)
reset: None, // Don't reset existing database
};
let mut db = OurDB::new(config)?;
// Store some data with auto-generated IDs
let data1 = b"First record";
let id1 = db.set(OurDBSetArgs {
id: None,
data: data1,
})?;
println!("Stored first record with ID: {}", id1);
let data2 = b"Second record";
let id2 = db.set(OurDBSetArgs {
id: None,
data: data2,
})?;
println!("Stored second record with ID: {}", id2);
// Retrieve and print the data
let retrieved1 = db.get(id1)?;
println!(
"Retrieved ID {}: {}",
id1,
String::from_utf8_lossy(&retrieved1)
);
let retrieved2 = db.get(id2)?;
println!(
"Retrieved ID {}: {}",
id2,
String::from_utf8_lossy(&retrieved2)
);
// Update a record to demonstrate history tracking
let updated_data = b"Updated first record";
db.set(OurDBSetArgs {
id: Some(id1),
data: updated_data,
})?;
println!("Updated record with ID: {}", id1);
// Get history for the updated record
let history = db.get_history(id1, 2)?;
println!("History for ID {}:", id1);
for (i, entry) in history.iter().enumerate() {
println!(" Version {}: {}", i, String::from_utf8_lossy(entry));
}
// Delete a record
db.delete(id2)?;
println!("Deleted record with ID: {}", id2);
// Verify deletion
match db.get(id2) {
Ok(_) => println!("Record still exists (unexpected)"),
Err(e) => println!("Verified deletion: {}", e),
}
// Close the database
db.close()?;
println!("Database closed successfully");
// Clean up (optional)
if std::env::var("KEEP_DB").is_err() {
std::fs::remove_dir_all(&db_path)?;
println!("Cleaned up database directory");
} else {
println!("Database kept at: {}", db_path.display());
}
Ok(())
}

View File

@@ -0,0 +1,124 @@
use ourdb::{OurDB, OurDBConfig, OurDBSetArgs};
use std::time::Instant;
fn main() -> Result<(), ourdb::Error> {
// Parse command-line arguments
let args: Vec<String> = std::env::args().collect();
// Default values
let mut incremental_mode = true;
let mut keysize: u8 = 4;
let mut num_operations = 10000;
// Parse arguments
for i in 1..args.len() {
if args[i] == "--no-incremental" {
incremental_mode = false;
} else if args[i] == "--keysize" && i + 1 < args.len() {
keysize = args[i + 1].parse().unwrap_or(4);
} else if args[i] == "--ops" && i + 1 < args.len() {
num_operations = args[i + 1].parse().unwrap_or(10000);
}
}
// Create a temporary directory for the database
let db_path = std::env::temp_dir().join("ourdb_benchmark");
std::fs::create_dir_all(&db_path)?;
println!("Database path: {}", db_path.display());
// Create a new database
let config = OurDBConfig {
path: db_path.clone(),
incremental_mode,
file_size: Some(1024 * 1024),
keysize: Some(keysize),
reset: Some(true), // Reset the database for benchmarking
};
let mut db = OurDB::new(config)?;
// Prepare test data (100 bytes per record)
let test_data = vec![b'A'; 100];
// Benchmark write operations
println!(
"Benchmarking {} write operations (incremental: {}, keysize: {})...",
num_operations, incremental_mode, keysize
);
let start = Instant::now();
let mut ids = Vec::with_capacity(num_operations);
for _ in 0..num_operations {
let id = if incremental_mode {
db.set(OurDBSetArgs {
id: None,
data: &test_data,
})?
} else {
// In non-incremental mode, we need to provide IDs
let id = ids.len() as u32 + 1;
db.set(OurDBSetArgs {
id: Some(id),
data: &test_data,
})?;
id
};
ids.push(id);
}
let write_duration = start.elapsed();
let writes_per_second = num_operations as f64 / write_duration.as_secs_f64();
println!(
"Write performance: {:.2} ops/sec ({:.2} ms/op)",
writes_per_second,
write_duration.as_secs_f64() * 1000.0 / num_operations as f64
);
// Benchmark read operations
println!("Benchmarking {} read operations...", num_operations);
let start = Instant::now();
for &id in &ids {
let _ = db.get(id)?;
}
let read_duration = start.elapsed();
let reads_per_second = num_operations as f64 / read_duration.as_secs_f64();
println!(
"Read performance: {:.2} ops/sec ({:.2} ms/op)",
reads_per_second,
read_duration.as_secs_f64() * 1000.0 / num_operations as f64
);
// Benchmark update operations
println!("Benchmarking {} update operations...", num_operations);
let start = Instant::now();
for &id in &ids {
db.set(OurDBSetArgs {
id: Some(id),
data: &test_data,
})?;
}
let update_duration = start.elapsed();
let updates_per_second = num_operations as f64 / update_duration.as_secs_f64();
println!(
"Update performance: {:.2} ops/sec ({:.2} ms/op)",
updates_per_second,
update_duration.as_secs_f64() * 1000.0 / num_operations as f64
);
// Clean up
db.close()?;
std::fs::remove_dir_all(&db_path)?;
Ok(())
}

View File

@@ -0,0 +1,83 @@
use ourdb::{OurDB, OurDBConfig, OurDBSetArgs};
use std::env::temp_dir;
use std::time::{SystemTime, UNIX_EPOCH};
fn main() -> Result<(), Box<dyn std::error::Error>> {
println!("Standalone OurDB Example");
println!("=======================\n");
// Create a temporary directory for the database
let timestamp = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_secs();
let db_path = temp_dir().join(format!("ourdb_example_{}", timestamp));
std::fs::create_dir_all(&db_path)?;
println!("Creating database at: {}", db_path.display());
// Create a new OurDB instance
let config = OurDBConfig {
path: db_path.clone(),
incremental_mode: true,
file_size: None,
keysize: None,
reset: Some(false),
};
let mut db = OurDB::new(config)?;
println!("Database created successfully");
// Store some data
let test_data = b"Hello, OurDB!";
let id = db.set(OurDBSetArgs {
id: None,
data: test_data,
})?;
println!("\nStored data with ID: {}", id);
// Retrieve the data
let retrieved = db.get(id)?;
println!("Retrieved data: {}", String::from_utf8_lossy(&retrieved));
// Update the data
let updated_data = b"Updated data in OurDB!";
db.set(OurDBSetArgs {
id: Some(id),
data: updated_data,
})?;
println!("\nUpdated data with ID: {}", id);
// Retrieve the updated data
let retrieved = db.get(id)?;
println!(
"Retrieved updated data: {}",
String::from_utf8_lossy(&retrieved)
);
// Get history
let history = db.get_history(id, 2)?;
println!("\nHistory for ID {}:", id);
for (i, data) in history.iter().enumerate() {
println!(" Version {}: {}", i + 1, String::from_utf8_lossy(data));
}
// Delete the data
db.delete(id)?;
println!("\nDeleted data with ID: {}", id);
// Try to retrieve the deleted data (should fail)
match db.get(id) {
Ok(_) => println!("Data still exists (unexpected)"),
Err(e) => println!("Verified deletion: {}", e),
}
println!("\nExample completed successfully!");
// Clean up
db.close()?;
std::fs::remove_dir_all(&db_path)?;
println!("Cleaned up database directory");
Ok(())
}

View File

@@ -0,0 +1,366 @@
use std::fs::{self, File, OpenOptions};
use std::io::{Read, Seek, SeekFrom, Write};
use crc32fast::Hasher;
use crate::error::Error;
use crate::location::Location;
use crate::OurDB;
// Header size: 2 bytes (size) + 4 bytes (CRC32) + 6 bytes (previous location)
pub const HEADER_SIZE: usize = 12;
impl OurDB {
/// Selects and opens a database file for read/write operations
pub(crate) fn db_file_select(&mut self, file_nr: u16) -> Result<(), Error> {
// No need to check if file_nr > 65535 as u16 can't exceed that value
let path = self.path.join(format!("{}.db", file_nr));
// Always close the current file if it's open
self.file = None;
// Create file if it doesn't exist
if !path.exists() {
self.create_new_db_file(file_nr)?;
}
// Open the file fresh
let file = OpenOptions::new().read(true).write(true).open(&path)?;
self.file = Some(file);
self.file_nr = file_nr;
Ok(())
}
/// Creates a new database file
pub(crate) fn create_new_db_file(&mut self, file_nr: u16) -> Result<(), Error> {
let new_file_path = self.path.join(format!("{}.db", file_nr));
let mut file = File::create(&new_file_path)?;
// Write a single byte to make all positions start from 1
file.write_all(&[0u8])?;
Ok(())
}
/// Gets the file number to use for the next write operation
pub(crate) fn get_file_nr(&mut self) -> Result<u16, Error> {
// For keysize 2, 3, or 4, we can only use file_nr 0
if self.lookup.keysize() <= 4 {
let path = self.path.join("0.db");
if !path.exists() {
self.create_new_db_file(0)?;
}
return Ok(0);
}
// For keysize 6, we can use multiple files
let path = self.path.join(format!("{}.db", self.last_used_file_nr));
if !path.exists() {
self.create_new_db_file(self.last_used_file_nr)?;
return Ok(self.last_used_file_nr);
}
let metadata = fs::metadata(&path)?;
if metadata.len() >= self.file_size as u64 {
self.last_used_file_nr += 1;
self.create_new_db_file(self.last_used_file_nr)?;
}
Ok(self.last_used_file_nr)
}
/// Stores data at the specified ID with history tracking
pub(crate) fn set_(
&mut self,
id: u32,
old_location: Location,
data: &[u8],
) -> Result<(), Error> {
// Validate data size - maximum is u16::MAX (65535 bytes or ~64KB)
if data.len() > u16::MAX as usize {
return Err(Error::InvalidOperation(format!(
"Data size exceeds maximum allowed size of {} bytes",
u16::MAX
)));
}
// Get file number to use
let file_nr = self.get_file_nr()?;
// Select the file
self.db_file_select(file_nr)?;
// Get current file position for lookup
let file = self
.file
.as_mut()
.ok_or_else(|| Error::Other("No file open".to_string()))?;
file.seek(SeekFrom::End(0))?;
let position = file.stream_position()? as u32;
// Create new location
let new_location = Location { file_nr, position };
// Calculate CRC of data
let crc = calculate_crc(data);
// Create header
let mut header = vec![0u8; HEADER_SIZE];
// Write size (2 bytes)
let size = data.len() as u16; // Safe now because we've validated the size
header[0] = (size & 0xFF) as u8;
header[1] = ((size >> 8) & 0xFF) as u8;
// Write CRC (4 bytes)
header[2] = (crc & 0xFF) as u8;
header[3] = ((crc >> 8) & 0xFF) as u8;
header[4] = ((crc >> 16) & 0xFF) as u8;
header[5] = ((crc >> 24) & 0xFF) as u8;
// Write previous location (6 bytes)
let prev_bytes = old_location.to_bytes();
for (i, &byte) in prev_bytes.iter().enumerate().take(6) {
header[6 + i] = byte;
}
// Write header
file.write_all(&header)?;
// Write actual data
file.write_all(data)?;
file.flush()?;
// Update lookup table with new position
self.lookup.set(id, new_location)?;
Ok(())
}
/// Retrieves data at the specified location
pub(crate) fn get_(&mut self, location: Location) -> Result<Vec<u8>, Error> {
if location.position == 0 {
return Err(Error::NotFound(format!(
"Record not found, location: {:?}",
location
)));
}
// Select the file
self.db_file_select(location.file_nr)?;
let file = self
.file
.as_mut()
.ok_or_else(|| Error::Other("No file open".to_string()))?;
// Read header
file.seek(SeekFrom::Start(location.position as u64))?;
let mut header = vec![0u8; HEADER_SIZE];
file.read_exact(&mut header)?;
// Parse size (2 bytes)
let size = u16::from(header[0]) | (u16::from(header[1]) << 8);
// Parse CRC (4 bytes)
let stored_crc = u32::from(header[2])
| (u32::from(header[3]) << 8)
| (u32::from(header[4]) << 16)
| (u32::from(header[5]) << 24);
// Read data
let mut data = vec![0u8; size as usize];
file.read_exact(&mut data)?;
// Verify CRC
let calculated_crc = calculate_crc(&data);
if calculated_crc != stored_crc {
return Err(Error::DataCorruption(
"CRC mismatch: data corruption detected".to_string(),
));
}
Ok(data)
}
/// Retrieves the previous position for a record (for history tracking)
pub(crate) fn get_prev_pos_(&mut self, location: Location) -> Result<Location, Error> {
if location.position == 0 {
return Err(Error::NotFound("Record not found".to_string()));
}
// Select the file
self.db_file_select(location.file_nr)?;
let file = self
.file
.as_mut()
.ok_or_else(|| Error::Other("No file open".to_string()))?;
// Skip size and CRC (6 bytes)
file.seek(SeekFrom::Start(location.position as u64 + 6))?;
// Read previous location (6 bytes)
let mut prev_bytes = vec![0u8; 6];
file.read_exact(&mut prev_bytes)?;
// Create location from bytes
Location::from_bytes(&prev_bytes, 6)
}
/// Deletes the record at the specified location
pub(crate) fn delete_(&mut self, id: u32, location: Location) -> Result<(), Error> {
if location.position == 0 {
return Err(Error::NotFound("Record not found".to_string()));
}
// Select the file
self.db_file_select(location.file_nr)?;
let file = self
.file
.as_mut()
.ok_or_else(|| Error::Other("No file open".to_string()))?;
// Read size first
file.seek(SeekFrom::Start(location.position as u64))?;
let mut size_bytes = vec![0u8; 2];
file.read_exact(&mut size_bytes)?;
let size = u16::from(size_bytes[0]) | (u16::from(size_bytes[1]) << 8);
// Write zeros for the entire record (header + data)
let zeros = vec![0u8; HEADER_SIZE + size as usize];
file.seek(SeekFrom::Start(location.position as u64))?;
file.write_all(&zeros)?;
// Clear lookup entry
self.lookup.delete(id)?;
Ok(())
}
/// Condenses the database by removing empty records and updating positions
pub fn condense(&mut self) -> Result<(), Error> {
// Create a temporary directory
let temp_path = self.path.join("temp");
fs::create_dir_all(&temp_path)?;
// Get all file numbers
let mut file_numbers = Vec::new();
for entry in fs::read_dir(&self.path)? {
let entry = entry?;
let path = entry.path();
if path.is_file() && path.extension().map_or(false, |ext| ext == "db") {
if let Some(stem) = path.file_stem() {
if let Ok(file_nr) = stem.to_string_lossy().parse::<u16>() {
file_numbers.push(file_nr);
}
}
}
}
// Process each file
for file_nr in file_numbers {
let src_path = self.path.join(format!("{}.db", file_nr));
let temp_file_path = temp_path.join(format!("{}.db", file_nr));
// Create new file
let mut temp_file = File::create(&temp_file_path)?;
temp_file.write_all(&[0u8])?; // Initialize with a byte
// Open source file
let mut src_file = File::open(&src_path)?;
// Read and process records
let mut buffer = vec![0u8; 1024]; // Read in chunks
let mut _position = 0;
while let Ok(bytes_read) = src_file.read(&mut buffer) {
if bytes_read == 0 {
break;
}
// Process the chunk
// This is a simplified version - in a real implementation,
// you would need to handle records that span chunk boundaries
_position += bytes_read;
}
// TODO: Implement proper record copying and position updating
// This would involve:
// 1. Reading each record from the source file
// 2. If not deleted (all zeros), copy to temp file
// 3. Update lookup table with new positions
}
// TODO: Replace original files with temp files
// Clean up
fs::remove_dir_all(&temp_path)?;
Ok(())
}
}
/// Calculates CRC32 for the data
fn calculate_crc(data: &[u8]) -> u32 {
let mut hasher = Hasher::new();
hasher.update(data);
hasher.finalize()
}
#[cfg(test)]
mod tests {
use std::path::PathBuf;
use crate::{OurDB, OurDBConfig, OurDBSetArgs};
use std::env::temp_dir;
use std::time::{SystemTime, UNIX_EPOCH};
fn get_temp_dir() -> PathBuf {
let timestamp = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_secs();
temp_dir().join(format!("ourdb_backend_test_{}", timestamp))
}
#[test]
fn test_backend_operations() {
let temp_dir = get_temp_dir();
let config = OurDBConfig {
path: temp_dir.clone(),
incremental_mode: false,
file_size: None,
keysize: None,
reset: None, // Don't reset existing database
};
let mut db = OurDB::new(config).unwrap();
// Test set and get
let test_data = b"Test data for backend operations";
let id = 1;
db.set(OurDBSetArgs {
id: Some(id),
data: test_data,
})
.unwrap();
let retrieved = db.get(id).unwrap();
assert_eq!(retrieved, test_data);
// Clean up
db.destroy().unwrap();
}
}

View File

@@ -0,0 +1,41 @@
use thiserror::Error;
/// Error types for OurDB operations
#[derive(Error, Debug)]
pub enum Error {
/// IO errors from file operations
#[error("IO error: {0}")]
Io(#[from] std::io::Error),
/// Data corruption errors
#[error("Data corruption: {0}")]
DataCorruption(String),
/// Invalid operation errors
#[error("Invalid operation: {0}")]
InvalidOperation(String),
/// Lookup table errors
#[error("Lookup error: {0}")]
LookupError(String),
/// Record not found errors
#[error("Record not found: {0}")]
NotFound(String),
/// Other errors
#[error("Error: {0}")]
Other(String),
}
impl From<String> for Error {
fn from(msg: String) -> Self {
Error::Other(msg)
}
}
impl From<&str> for Error {
fn from(msg: &str) -> Self {
Error::Other(msg.to_string())
}
}

View File

@@ -0,0 +1,293 @@
mod backend;
mod error;
mod location;
mod lookup;
pub use error::Error;
pub use location::Location;
pub use lookup::LookupTable;
use std::fs::File;
use std::path::PathBuf;
/// OurDB is a lightweight, efficient key-value database implementation that provides
/// data persistence with history tracking capabilities.
pub struct OurDB {
/// Directory path for storage
path: PathBuf,
/// Whether to use auto-increment mode
incremental_mode: bool,
/// Maximum file size (default: 500MB)
file_size: u32,
/// Lookup table for mapping keys to locations
lookup: LookupTable,
/// Currently open file
file: Option<File>,
/// Current file number
file_nr: u16,
/// Last used file number
last_used_file_nr: u16,
}
/// Configuration for creating a new OurDB instance
pub struct OurDBConfig {
/// Directory path for storage
pub path: PathBuf,
/// Whether to use auto-increment mode
pub incremental_mode: bool,
/// Maximum file size (default: 500MB)
pub file_size: Option<u32>,
/// Lookup table key size (default: 4)
/// - 2: For databases with < 65,536 records (single file)
/// - 3: For databases with < 16,777,216 records (single file)
/// - 4: For databases with < 4,294,967,296 records (single file)
/// - 6: For large databases requiring multiple files
pub keysize: Option<u8>,
/// Whether to reset the database if it exists (default: false)
pub reset: Option<bool>,
}
/// Arguments for setting a value in OurDB
pub struct OurDBSetArgs<'a> {
/// ID for the record (optional in incremental mode)
pub id: Option<u32>,
/// Data to store
pub data: &'a [u8],
}
impl OurDB {
/// Creates a new OurDB instance with the given configuration
pub fn new(config: OurDBConfig) -> Result<Self, Error> {
// If reset is true and the path exists, remove it first
if config.reset.unwrap_or(false) && config.path.exists() {
std::fs::remove_dir_all(&config.path)?;
}
// Create directory if it doesn't exist
std::fs::create_dir_all(&config.path)?;
// Create lookup table
let lookup_path = config.path.join("lookup");
std::fs::create_dir_all(&lookup_path)?;
let lookup_config = lookup::LookupConfig {
size: 1000000, // Default size
keysize: config.keysize.unwrap_or(4),
lookuppath: lookup_path.to_string_lossy().to_string(),
incremental_mode: config.incremental_mode,
};
let lookup = LookupTable::new(lookup_config)?;
let mut db = OurDB {
path: config.path,
incremental_mode: config.incremental_mode,
file_size: config.file_size.unwrap_or(500 * (1 << 20)), // 500MB default
lookup,
file: None,
file_nr: 0,
last_used_file_nr: 0,
};
// Load existing metadata if available
db.load()?;
Ok(db)
}
/// Sets a value in the database
///
/// In incremental mode:
/// - If ID is provided, it updates an existing record
/// - If ID is not provided, it creates a new record with auto-generated ID
///
/// In key-value mode:
/// - ID must be provided
pub fn set(&mut self, args: OurDBSetArgs) -> Result<u32, Error> {
if self.incremental_mode {
if let Some(id) = args.id {
// This is an update
let location = self.lookup.get(id)?;
if location.position == 0 {
return Err(Error::InvalidOperation(
"Cannot set ID for insertions when incremental mode is enabled".to_string(),
));
}
self.set_(id, location, args.data)?;
Ok(id)
} else {
// This is an insert
let id = self.lookup.get_next_id()?;
self.set_(id, Location::default(), args.data)?;
Ok(id)
}
} else {
// Using key-value mode
let id = args.id.ok_or_else(|| {
Error::InvalidOperation(
"ID must be provided when incremental is disabled".to_string(),
)
})?;
let location = self.lookup.get(id)?;
self.set_(id, location, args.data)?;
Ok(id)
}
}
/// Retrieves data stored at the specified key position
pub fn get(&mut self, id: u32) -> Result<Vec<u8>, Error> {
let location = self.lookup.get(id)?;
self.get_(location)
}
/// Retrieves a list of previous values for the specified key
///
/// The depth parameter controls how many historical values to retrieve (maximum)
pub fn get_history(&mut self, id: u32, depth: u8) -> Result<Vec<Vec<u8>>, Error> {
let mut result = Vec::new();
let mut current_location = self.lookup.get(id)?;
// Traverse the history chain up to specified depth
for _ in 0..depth {
// Get current value
let data = self.get_(current_location)?;
result.push(data);
// Try to get previous location
match self.get_prev_pos_(current_location) {
Ok(location) => {
if location.position == 0 {
break;
}
current_location = location;
}
Err(_) => break,
}
}
Ok(result)
}
/// Deletes the data at the specified key position
pub fn delete(&mut self, id: u32) -> Result<(), Error> {
let location = self.lookup.get(id)?;
self.delete_(id, location)?;
self.lookup.delete(id)?;
Ok(())
}
/// Returns the next ID which will be used when storing in incremental mode
pub fn get_next_id(&mut self) -> Result<u32, Error> {
if !self.incremental_mode {
return Err(Error::InvalidOperation(
"Incremental mode is not enabled".to_string(),
));
}
self.lookup.get_next_id()
}
/// Closes the database, ensuring all data is saved
pub fn close(&mut self) -> Result<(), Error> {
self.save()?;
self.close_();
Ok(())
}
/// Destroys the database, removing all files
pub fn destroy(&mut self) -> Result<(), Error> {
let _ = self.close();
std::fs::remove_dir_all(&self.path)?;
Ok(())
}
// Helper methods
fn lookup_dump_path(&self) -> PathBuf {
self.path.join("lookup_dump.db")
}
fn load(&mut self) -> Result<(), Error> {
let dump_path = self.lookup_dump_path();
if dump_path.exists() {
self.lookup.import_sparse(&dump_path.to_string_lossy())?;
}
Ok(())
}
fn save(&mut self) -> Result<(), Error> {
self.lookup
.export_sparse(&self.lookup_dump_path().to_string_lossy())?;
Ok(())
}
fn close_(&mut self) {
self.file = None;
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::env::temp_dir;
use std::time::{SystemTime, UNIX_EPOCH};
fn get_temp_dir() -> PathBuf {
let timestamp = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_secs();
temp_dir().join(format!("ourdb_test_{}", timestamp))
}
#[test]
fn test_basic_operations() {
let temp_dir = get_temp_dir();
let config = OurDBConfig {
path: temp_dir.clone(),
incremental_mode: true,
file_size: None,
keysize: None,
reset: None, // Don't reset existing database
};
let mut db = OurDB::new(config).unwrap();
// Test set and get
let test_data = b"Hello, OurDB!";
let id = db
.set(OurDBSetArgs {
id: None,
data: test_data,
})
.unwrap();
let retrieved = db.get(id).unwrap();
assert_eq!(retrieved, test_data);
// Test update
let updated_data = b"Updated data";
db.set(OurDBSetArgs {
id: Some(id),
data: updated_data,
})
.unwrap();
let retrieved = db.get(id).unwrap();
assert_eq!(retrieved, updated_data);
// Test history
let history = db.get_history(id, 2).unwrap();
assert_eq!(history.len(), 2);
assert_eq!(history[0], updated_data);
assert_eq!(history[1], test_data);
// Test delete
db.delete(id).unwrap();
assert!(db.get(id).is_err());
// Clean up
db.destroy().unwrap();
}
}

View File

@@ -0,0 +1,178 @@
use crate::error::Error;
/// Location represents a physical position in a database file
///
/// It consists of a file number and a position within that file.
/// This allows OurDB to span multiple files for large datasets.
#[derive(Debug, Clone, Copy, Default, PartialEq, Eq)]
pub struct Location {
/// File number (0-65535)
pub file_nr: u16,
/// Position within the file
pub position: u32,
}
impl Location {
/// Creates a new Location from bytes based on keysize
///
/// - keysize = 2: Only position (2 bytes), file_nr = 0
/// - keysize = 3: Only position (3 bytes), file_nr = 0
/// - keysize = 4: Only position (4 bytes), file_nr = 0
/// - keysize = 6: file_nr (2 bytes) + position (4 bytes)
pub fn from_bytes(bytes: &[u8], keysize: u8) -> Result<Self, Error> {
// Validate keysize
if ![2, 3, 4, 6].contains(&keysize) {
return Err(Error::InvalidOperation(format!(
"Invalid keysize: {}",
keysize
)));
}
// Create padded bytes
let mut padded = vec![0u8; keysize as usize];
if bytes.len() > keysize as usize {
return Err(Error::InvalidOperation(
"Input bytes exceed keysize".to_string(),
));
}
let start_idx = keysize as usize - bytes.len();
for (i, &b) in bytes.iter().enumerate() {
if i + start_idx < padded.len() {
padded[start_idx + i] = b;
}
}
let mut location = Location::default();
match keysize {
2 => {
// Only position, 2 bytes big endian
location.position = u32::from(padded[0]) << 8 | u32::from(padded[1]);
location.file_nr = 0;
// Verify limits
if location.position > 0xFFFF {
return Err(Error::InvalidOperation(
"Position exceeds max value for keysize=2 (max 65535)".to_string(),
));
}
}
3 => {
// Only position, 3 bytes big endian
location.position =
u32::from(padded[0]) << 16 | u32::from(padded[1]) << 8 | u32::from(padded[2]);
location.file_nr = 0;
// Verify limits
if location.position > 0xFFFFFF {
return Err(Error::InvalidOperation(
"Position exceeds max value for keysize=3 (max 16777215)".to_string(),
));
}
}
4 => {
// Only position, 4 bytes big endian
location.position = u32::from(padded[0]) << 24
| u32::from(padded[1]) << 16
| u32::from(padded[2]) << 8
| u32::from(padded[3]);
location.file_nr = 0;
}
6 => {
// 2 bytes file_nr + 4 bytes position, all big endian
location.file_nr = u16::from(padded[0]) << 8 | u16::from(padded[1]);
location.position = u32::from(padded[2]) << 24
| u32::from(padded[3]) << 16
| u32::from(padded[4]) << 8
| u32::from(padded[5]);
}
_ => unreachable!(),
}
Ok(location)
}
/// Converts the location to bytes (always 6 bytes)
///
/// Format: [file_nr (2 bytes)][position (4 bytes)]
pub fn to_bytes(&self) -> Vec<u8> {
let mut bytes = Vec::with_capacity(6);
// Put file_nr first (2 bytes)
bytes.push((self.file_nr >> 8) as u8);
bytes.push(self.file_nr as u8);
// Put position next (4 bytes)
bytes.push((self.position >> 24) as u8);
bytes.push((self.position >> 16) as u8);
bytes.push((self.position >> 8) as u8);
bytes.push(self.position as u8);
bytes
}
/// Converts the location to a u64 value
///
/// The file_nr is stored in the most significant bits
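    ///
    /// For example, `file_nr = 0xABCD` with `position = 0x12345678` packs to
    /// `0x0000_ABCD_1234_5678` (mirrored by `test_location_to_u64` below).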
pub fn to_u64(&self) -> u64 {
(u64::from(self.file_nr) << 32) | u64::from(self.position)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_location_from_bytes_keysize_2() {
let bytes = vec![0x12, 0x34];
let location = Location::from_bytes(&bytes, 2).unwrap();
assert_eq!(location.file_nr, 0);
assert_eq!(location.position, 0x1234);
}
#[test]
fn test_location_from_bytes_keysize_3() {
let bytes = vec![0x12, 0x34, 0x56];
let location = Location::from_bytes(&bytes, 3).unwrap();
assert_eq!(location.file_nr, 0);
assert_eq!(location.position, 0x123456);
}
#[test]
fn test_location_from_bytes_keysize_4() {
let bytes = vec![0x12, 0x34, 0x56, 0x78];
let location = Location::from_bytes(&bytes, 4).unwrap();
assert_eq!(location.file_nr, 0);
assert_eq!(location.position, 0x12345678);
}
#[test]
fn test_location_from_bytes_keysize_6() {
let bytes = vec![0xAB, 0xCD, 0x12, 0x34, 0x56, 0x78];
let location = Location::from_bytes(&bytes, 6).unwrap();
assert_eq!(location.file_nr, 0xABCD);
assert_eq!(location.position, 0x12345678);
}
#[test]
fn test_location_to_bytes() {
let location = Location {
file_nr: 0xABCD,
position: 0x12345678,
};
let bytes = location.to_bytes();
assert_eq!(bytes, vec![0xAB, 0xCD, 0x12, 0x34, 0x56, 0x78]);
}
#[test]
fn test_location_to_u64() {
let location = Location {
file_nr: 0xABCD,
position: 0x12345678,
};
let value = location.to_u64();
assert_eq!(value, 0xABCD_0000_0000 | 0x12345678);
}
}

View File

@@ -0,0 +1,540 @@
use std::fs::{self, File, OpenOptions};
use std::io::{Read, Seek, SeekFrom, Write};
use std::path::Path;
use crate::error::Error;
use crate::location::Location;
const DATA_FILE_NAME: &str = "data";
const INCREMENTAL_FILE_NAME: &str = ".inc";
/// Configuration for creating a new lookup table
pub struct LookupConfig {
/// Size of the lookup table
pub size: u32,
    /// Size of each entry in bytes (2, 3, 4, or 6)
/// - 2: For databases with < 65,536 records (single file)
/// - 3: For databases with < 16,777,216 records (single file)
/// - 4: For databases with < 4,294,967,296 records (single file)
/// - 6: For large databases requiring multiple files
pub keysize: u8,
/// Path for disk-based lookup
pub lookuppath: String,
/// Whether to use incremental mode
pub incremental_mode: bool,
}
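
// A minimal construction sketch (values are illustrative, not defaults):
//
//     let config = LookupConfig {
//         size: 1000,                // number of addressable entries
//         keysize: 4,                // single-file database, < 2^32 records
//         lookuppath: String::new(), // empty string selects the in-memory table
//         incremental_mode: true,
//     };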
/// Lookup table maps keys to physical locations in the backend storage
pub struct LookupTable {
    /// Size of each entry in bytes (2, 3, 4, or 6)
keysize: u8,
/// Path for disk-based lookup
lookuppath: String,
/// In-memory data for memory-based lookup
data: Vec<u8>,
/// Next empty slot if incremental mode is enabled
incremental: Option<u32>,
}
impl LookupTable {
/// Returns the keysize of this lookup table
pub fn keysize(&self) -> u8 {
self.keysize
}
/// Creates a new lookup table with the given configuration
pub fn new(config: LookupConfig) -> Result<Self, Error> {
// Verify keysize is valid
if ![2, 3, 4, 6].contains(&config.keysize) {
return Err(Error::InvalidOperation(format!(
"Invalid keysize: {}",
config.keysize
)));
}
let incremental = if config.incremental_mode {
Some(get_incremental_info(&config)?)
} else {
None
};
if !config.lookuppath.is_empty() {
// Create directory if it doesn't exist
fs::create_dir_all(&config.lookuppath)?;
// For disk-based lookup, create empty file if it doesn't exist
let data_path = Path::new(&config.lookuppath).join(DATA_FILE_NAME);
if !data_path.exists() {
let data = vec![0u8; config.size as usize * config.keysize as usize];
fs::write(&data_path, &data)?;
}
Ok(LookupTable {
data: Vec::new(),
keysize: config.keysize,
lookuppath: config.lookuppath,
incremental,
})
} else {
// For memory-based lookup
Ok(LookupTable {
data: vec![0u8; config.size as usize * config.keysize as usize],
keysize: config.keysize,
lookuppath: String::new(),
incremental,
})
}
}
/// Gets a location for the given ID
pub fn get(&self, id: u32) -> Result<Location, Error> {
let entry_size = self.keysize as usize;
if !self.lookuppath.is_empty() {
// Disk-based lookup
let data_path = Path::new(&self.lookuppath).join(DATA_FILE_NAME);
// Check file size first
let file_size = fs::metadata(&data_path)?.len();
let start_pos = id as u64 * entry_size as u64;
if start_pos + entry_size as u64 > file_size {
                return Err(Error::LookupError(format!(
                    "Out-of-bounds read in lookup table {}: end offset {} exceeds file size {}",
                    self.lookuppath,
                    start_pos + entry_size as u64,
                    file_size
                )));
}
// Read directly from file
let mut file = File::open(&data_path)?;
file.seek(SeekFrom::Start(start_pos))?;
let mut data = vec![0u8; entry_size];
let bytes_read = file.read(&mut data)?;
if bytes_read < entry_size {
return Err(Error::LookupError(format!(
"Incomplete read: expected {} bytes but got {}",
entry_size, bytes_read
)));
}
return Location::from_bytes(&data, self.keysize);
}
        // Memory-based lookup
        let start = id as usize * entry_size;
        let end = start + entry_size;
        if end > self.data.len() {
            return Err(Error::LookupError("Index out of bounds".to_string()));
        }
        Location::from_bytes(&self.data[start..end], self.keysize)
}
/// Sets a location for the given ID
pub fn set(&mut self, id: u32, location: Location) -> Result<(), Error> {
let entry_size = self.keysize as usize;
// Handle incremental mode
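        // In incremental mode, IDs below the counter are plain updates;
        // writing to the counter value itself allocates that slot and
        // advances the counter; anything beyond it is rejected.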
if let Some(incremental) = self.incremental {
if id == incremental {
self.increment_index()?;
}
if id > incremental {
                return Err(Error::InvalidOperation(
                    "Cannot insert with an ID beyond the next auto-increment value in incremental mode".to_string(),
                ));
}
}
// Convert location to bytes based on keysize
let location_bytes = match self.keysize {
2 => {
if location.file_nr != 0 {
return Err(Error::InvalidOperation(
"file_nr must be 0 for keysize=2".to_string(),
));
}
if location.position > 0xFFFF {
return Err(Error::InvalidOperation(
"position exceeds max value for keysize=2 (max 65535)".to_string(),
));
}
vec![(location.position >> 8) as u8, location.position as u8]
}
3 => {
if location.file_nr != 0 {
return Err(Error::InvalidOperation(
"file_nr must be 0 for keysize=3".to_string(),
));
}
if location.position > 0xFFFFFF {
return Err(Error::InvalidOperation(
"position exceeds max value for keysize=3 (max 16777215)".to_string(),
));
}
vec![
(location.position >> 16) as u8,
(location.position >> 8) as u8,
location.position as u8,
]
}
4 => {
if location.file_nr != 0 {
return Err(Error::InvalidOperation(
"file_nr must be 0 for keysize=4".to_string(),
));
}
vec![
(location.position >> 24) as u8,
(location.position >> 16) as u8,
(location.position >> 8) as u8,
location.position as u8,
]
}
6 => {
// Full location with file_nr and position
location.to_bytes()
}
_ => {
return Err(Error::InvalidOperation(format!(
"Invalid keysize: {}",
self.keysize
)))
}
};
if !self.lookuppath.is_empty() {
// Disk-based lookup
let data_path = Path::new(&self.lookuppath).join(DATA_FILE_NAME);
let mut file = OpenOptions::new().write(true).open(data_path)?;
let start_pos = id as u64 * entry_size as u64;
file.seek(SeekFrom::Start(start_pos))?;
file.write_all(&location_bytes)?;
} else {
// Memory-based lookup
            let start = id as usize * entry_size;
            if start + entry_size > self.data.len() {
                return Err(Error::LookupError("Index out of bounds".to_string()));
            }
            self.data[start..start + entry_size].copy_from_slice(&location_bytes);
}
Ok(())
}
/// Deletes an entry for the given ID
pub fn delete(&mut self, id: u32) -> Result<(), Error> {
// Set location to all zeros
self.set(id, Location::default())
}
/// Gets the next available ID in incremental mode
pub fn get_next_id(&self) -> Result<u32, Error> {
let incremental = self.incremental.ok_or_else(|| {
Error::InvalidOperation("Lookup table not in incremental mode".to_string())
})?;
let table_size = if !self.lookuppath.is_empty() {
let data_path = Path::new(&self.lookuppath).join(DATA_FILE_NAME);
fs::metadata(data_path)?.len() as u32
} else {
self.data.len() as u32
};
if incremental * self.keysize as u32 >= table_size {
return Err(Error::LookupError("Lookup table is full".to_string()));
}
Ok(incremental)
}
/// Increments the index in incremental mode
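    ///
    /// In disk-backed mode the new counter value is also persisted to the
    /// `.inc` file so that it survives reopening the table.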
pub fn increment_index(&mut self) -> Result<(), Error> {
let mut incremental = self.incremental.ok_or_else(|| {
Error::InvalidOperation("Lookup table not in incremental mode".to_string())
})?;
incremental += 1;
self.incremental = Some(incremental);
if !self.lookuppath.is_empty() {
let inc_path = Path::new(&self.lookuppath).join(INCREMENTAL_FILE_NAME);
fs::write(inc_path, incremental.to_string())?;
}
Ok(())
}
/// Exports the lookup table to a file
pub fn export_data(&self, path: &str) -> Result<(), Error> {
if !self.lookuppath.is_empty() {
// For disk-based lookup, just copy the file
let data_path = Path::new(&self.lookuppath).join(DATA_FILE_NAME);
fs::copy(data_path, path)?;
} else {
// For memory-based lookup, write the data to file
fs::write(path, &self.data)?;
}
Ok(())
}
/// Imports the lookup table from a file
pub fn import_data(&mut self, path: &str) -> Result<(), Error> {
if !self.lookuppath.is_empty() {
// For disk-based lookup, copy the file
let data_path = Path::new(&self.lookuppath).join(DATA_FILE_NAME);
fs::copy(path, data_path)?;
} else {
// For memory-based lookup, read the data from file
self.data = fs::read(path)?;
}
Ok(())
}
/// Exports only non-zero entries to save space
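    ///
    /// Each record in the sparse file is a 4-byte big-endian ID followed by
    /// the `keysize`-byte entry, so the output length is always a multiple
    /// of `4 + keysize` (the same layout `import_sparse` expects).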
pub fn export_sparse(&self, path: &str) -> Result<(), Error> {
let mut output = Vec::new();
let entry_size = self.keysize as usize;
if !self.lookuppath.is_empty() {
// For disk-based lookup
let data_path = Path::new(&self.lookuppath).join(DATA_FILE_NAME);
let mut file = File::open(&data_path)?;
let file_size = fs::metadata(&data_path)?.len();
let max_entries = file_size / entry_size as u64;
for id in 0..max_entries {
file.seek(SeekFrom::Start(id * entry_size as u64))?;
let mut buffer = vec![0u8; entry_size];
let bytes_read = file.read(&mut buffer)?;
if bytes_read < entry_size {
break;
}
// Check if entry is non-zero
if buffer.iter().any(|&b| b != 0) {
// Write ID (4 bytes) + entry
output.extend_from_slice(&(id as u32).to_be_bytes());
output.extend_from_slice(&buffer);
}
}
} else {
// For memory-based lookup
let max_entries = self.data.len() / entry_size;
for id in 0..max_entries {
let start = id * entry_size;
let entry = &self.data[start..start + entry_size];
// Check if entry is non-zero
if entry.iter().any(|&b| b != 0) {
// Write ID (4 bytes) + entry
output.extend_from_slice(&(id as u32).to_be_bytes());
output.extend_from_slice(entry);
}
}
}
// Write the output to file
fs::write(path, &output)?;
Ok(())
}
/// Imports sparse data (only non-zero entries)
pub fn import_sparse(&mut self, path: &str) -> Result<(), Error> {
let data = fs::read(path)?;
let entry_size = self.keysize as usize;
let record_size = 4 + entry_size; // ID (4 bytes) + entry
if data.len() % record_size != 0 {
return Err(Error::DataCorruption(
"Invalid sparse data format: size mismatch".to_string(),
));
}
for chunk_start in (0..data.len()).step_by(record_size) {
if chunk_start + record_size > data.len() {
break;
}
// Extract ID (4 bytes)
let id_bytes = &data[chunk_start..chunk_start + 4];
let id = u32::from_be_bytes([id_bytes[0], id_bytes[1], id_bytes[2], id_bytes[3]]);
// Extract entry
let entry = &data[chunk_start + 4..chunk_start + record_size];
// Create location from entry
let location = Location::from_bytes(entry, self.keysize)?;
// Set the entry
self.set(id, location)?;
}
Ok(())
}
/// Finds the highest ID with a non-zero entry
pub fn find_last_entry(&mut self) -> Result<u32, Error> {
let mut last_id = 0u32;
let entry_size = self.keysize as usize;
if !self.lookuppath.is_empty() {
// For disk-based lookup
let data_path = Path::new(&self.lookuppath).join(DATA_FILE_NAME);
let mut file = File::open(&data_path)?;
let file_size = fs::metadata(&data_path)?.len();
let mut buffer = vec![0u8; entry_size];
let mut pos = 0u32;
while (pos as u64 * entry_size as u64) < file_size {
file.seek(SeekFrom::Start(pos as u64 * entry_size as u64))?;
let bytes_read = file.read(&mut buffer)?;
                if bytes_read < entry_size {
break;
}
let location = Location::from_bytes(&buffer, self.keysize)?;
if location.position != 0 || location.file_nr != 0 {
last_id = pos;
}
pos += 1;
}
} else {
// For memory-based lookup
for i in 0..(self.data.len() / entry_size) as u32 {
if let Ok(location) = self.get(i) {
if location.position != 0 || location.file_nr != 0 {
last_id = i;
}
}
}
}
Ok(last_id)
}
}
/// Helper function to get the incremental value
fn get_incremental_info(config: &LookupConfig) -> Result<u32, Error> {
if !config.incremental_mode {
return Ok(0);
}
if !config.lookuppath.is_empty() {
let inc_path = Path::new(&config.lookuppath).join(INCREMENTAL_FILE_NAME);
if !inc_path.exists() {
// Create a separate file for storing the incremental value
fs::write(&inc_path, "1")?;
}
let inc_str = fs::read_to_string(&inc_path)?;
let incremental = match inc_str.trim().parse::<u32>() {
Ok(val) => val,
Err(_) => {
// If the value is invalid, reset it to 1
fs::write(&inc_path, "1")?;
1
}
};
Ok(incremental)
} else {
// For memory-based lookup, start with 1
Ok(1)
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::env::temp_dir;
use std::path::PathBuf;
use std::time::{SystemTime, UNIX_EPOCH};
fn get_temp_dir() -> PathBuf {
let timestamp = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_secs();
temp_dir().join(format!("ourdb_lookup_test_{}", timestamp))
}
#[test]
fn test_memory_lookup() {
let config = LookupConfig {
size: 1000,
keysize: 4,
lookuppath: String::new(),
incremental_mode: true,
};
let mut lookup = LookupTable::new(config).unwrap();
// Test set and get
let location = Location {
file_nr: 0,
position: 12345,
};
lookup.set(1, location).unwrap();
let retrieved = lookup.get(1).unwrap();
assert_eq!(retrieved.file_nr, location.file_nr);
assert_eq!(retrieved.position, location.position);
// Test incremental mode
let next_id = lookup.get_next_id().unwrap();
assert_eq!(next_id, 2);
lookup.increment_index().unwrap();
let next_id = lookup.get_next_id().unwrap();
assert_eq!(next_id, 3);
}
#[test]
fn test_disk_lookup() {
let temp_dir = get_temp_dir();
fs::create_dir_all(&temp_dir).unwrap();
let config = LookupConfig {
size: 1000,
keysize: 4,
lookuppath: temp_dir.to_string_lossy().to_string(),
incremental_mode: true,
};
let mut lookup = LookupTable::new(config).unwrap();
// Test set and get
let location = Location {
file_nr: 0,
position: 12345,
};
lookup.set(1, location).unwrap();
let retrieved = lookup.get(1).unwrap();
assert_eq!(retrieved.file_nr, location.file_nr);
assert_eq!(retrieved.position, location.position);
// Clean up
fs::remove_dir_all(temp_dir).unwrap();
}
}

View File

@@ -0,0 +1,369 @@
use ourdb::{OurDB, OurDBConfig, OurDBSetArgs};
use rand;
use std::env::temp_dir;
use std::fs;
use std::path::PathBuf;
use std::time::{SystemTime, UNIX_EPOCH};
// Helper function to create a unique temporary directory for tests
fn get_temp_dir() -> PathBuf {
let timestamp = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_nanos();
let random_part = rand::random::<u32>();
let dir = temp_dir().join(format!("ourdb_test_{}_{}", timestamp, random_part));
// Ensure the directory exists and is empty
if dir.exists() {
std::fs::remove_dir_all(&dir).unwrap();
}
std::fs::create_dir_all(&dir).unwrap();
dir
}
#[test]
fn test_basic_operations() {
let temp_dir = get_temp_dir();
// Create a new database with incremental mode
let config = OurDBConfig {
path: temp_dir.clone(),
incremental_mode: true,
file_size: None,
keysize: None,
reset: None,
};
let mut db = OurDB::new(config).unwrap();
// Test set and get
let test_data = b"Hello, OurDB!";
let id = db
.set(OurDBSetArgs {
id: None,
data: test_data,
})
.unwrap();
let retrieved = db.get(id).unwrap();
assert_eq!(retrieved, test_data);
// Test update
let updated_data = b"Updated data";
db.set(OurDBSetArgs {
id: Some(id),
data: updated_data,
})
.unwrap();
let retrieved = db.get(id).unwrap();
assert_eq!(retrieved, updated_data);
// Test history
let history = db.get_history(id, 2).unwrap();
assert_eq!(history.len(), 2);
assert_eq!(history[0], updated_data);
assert_eq!(history[1], test_data);
// Test delete
db.delete(id).unwrap();
assert!(db.get(id).is_err());
// Clean up
db.destroy().unwrap();
}
#[test]
fn test_key_value_mode() {
let temp_dir = get_temp_dir();
// Create a new database with key-value mode
let config = OurDBConfig {
path: temp_dir.clone(),
incremental_mode: false,
file_size: None,
keysize: None,
reset: None,
};
let mut db = OurDB::new(config).unwrap();
// Test set with explicit ID
let test_data = b"Key-value data";
let id = 42;
db.set(OurDBSetArgs {
id: Some(id),
data: test_data,
})
.unwrap();
let retrieved = db.get(id).unwrap();
assert_eq!(retrieved, test_data);
// Verify next_id fails in key-value mode
assert!(db.get_next_id().is_err());
// Clean up
db.destroy().unwrap();
}
#[test]
fn test_incremental_mode() {
let temp_dir = get_temp_dir();
// Create a new database with incremental mode
let config = OurDBConfig {
path: temp_dir.clone(),
incremental_mode: true,
file_size: None,
keysize: None,
reset: None,
};
let mut db = OurDB::new(config).unwrap();
// Test auto-increment IDs
let data1 = b"First record";
let id1 = db
.set(OurDBSetArgs {
id: None,
data: data1,
})
.unwrap();
let data2 = b"Second record";
let id2 = db
.set(OurDBSetArgs {
id: None,
data: data2,
})
.unwrap();
// IDs should be sequential
assert_eq!(id2, id1 + 1);
// Verify get_next_id works
let next_id = db.get_next_id().unwrap();
assert_eq!(next_id, id2 + 1);
// Clean up
db.destroy().unwrap();
}
#[test]
fn test_persistence() {
let temp_dir = get_temp_dir();
// Create data in a new database
{
let config = OurDBConfig {
path: temp_dir.clone(),
incremental_mode: true,
file_size: None,
keysize: None,
reset: None,
};
let mut db = OurDB::new(config).unwrap();
let test_data = b"Persistent data";
let id = db
.set(OurDBSetArgs {
id: None,
data: test_data,
})
.unwrap();
// Explicitly close the database
db.close().unwrap();
// ID should be 1 in a new database
assert_eq!(id, 1);
}
// Reopen the database and verify data persists
{
let config = OurDBConfig {
path: temp_dir.clone(),
incremental_mode: true,
file_size: None,
keysize: None,
reset: None,
};
let mut db = OurDB::new(config).unwrap();
// Verify data is still there
let retrieved = db.get(1).unwrap();
assert_eq!(retrieved, b"Persistent data");
// Verify incremental counter persisted
let next_id = db.get_next_id().unwrap();
assert_eq!(next_id, 2);
// Clean up
db.destroy().unwrap();
}
}
#[test]
fn test_different_keysizes() {
for keysize in [2, 3, 4, 6].iter() {
let temp_dir = get_temp_dir();
// Ensure the directory exists
std::fs::create_dir_all(&temp_dir).unwrap();
// Create a new database with specified keysize
let config = OurDBConfig {
path: temp_dir.clone(),
incremental_mode: true,
file_size: None,
keysize: Some(*keysize),
reset: None,
};
let mut db = OurDB::new(config).unwrap();
// Test basic operations
let test_data = b"Keysize test data";
let id = db
.set(OurDBSetArgs {
id: None,
data: test_data,
})
.unwrap();
let retrieved = db.get(id).unwrap();
assert_eq!(retrieved, test_data);
// Clean up
db.destroy().unwrap();
}
}
#[test]
fn test_large_data() {
let temp_dir = get_temp_dir();
// Create a new database
let config = OurDBConfig {
path: temp_dir.clone(),
incremental_mode: true,
file_size: None,
keysize: None,
reset: None,
};
let mut db = OurDB::new(config).unwrap();
// Create a large data set (60KB - within the 64KB limit)
let large_data = vec![b'X'; 60 * 1024];
// Store and retrieve large data
let id = db
.set(OurDBSetArgs {
id: None,
data: &large_data,
})
.unwrap();
let retrieved = db.get(id).unwrap();
assert_eq!(retrieved.len(), large_data.len());
assert_eq!(retrieved, large_data);
// Clean up
db.destroy().unwrap();
}
#[test]
fn test_exceed_size_limit() {
let temp_dir = get_temp_dir();
// Create a new database
let config = OurDBConfig {
path: temp_dir.clone(),
incremental_mode: true,
file_size: None,
keysize: None,
reset: None,
};
let mut db = OurDB::new(config).unwrap();
// Create data larger than the 64KB limit (70KB)
let oversized_data = vec![b'X'; 70 * 1024];
// Attempt to store data that exceeds the size limit
let result = db.set(OurDBSetArgs {
id: None,
data: &oversized_data,
});
// Verify that an error is returned
assert!(
result.is_err(),
"Expected an error when storing data larger than 64KB"
);
// Clean up
db.destroy().unwrap();
}
#[test]
fn test_multiple_files() {
let temp_dir = get_temp_dir();
// Create a new database with small file size to force multiple files
let config = OurDBConfig {
path: temp_dir.clone(),
incremental_mode: true,
file_size: Some(1024), // Very small file size (1KB)
keysize: Some(6), // 6-byte keysize for multiple files
reset: None,
};
let mut db = OurDB::new(config).unwrap();
// Store enough data to span multiple files
let data_size = 500; // bytes per record
let test_data = vec![b'A'; data_size];
let mut ids = Vec::new();
for _ in 0..10 {
let id = db
.set(OurDBSetArgs {
id: None,
data: &test_data,
})
.unwrap();
ids.push(id);
}
// Verify all data can be retrieved
for &id in &ids {
let retrieved = db.get(id).unwrap();
assert_eq!(retrieved.len(), data_size);
}
// Verify multiple files were created
let files = fs::read_dir(&temp_dir)
.unwrap()
.filter_map(Result::ok)
.filter(|entry| {
let path = entry.path();
path.is_file() && path.extension().map_or(false, |ext| ext == "db")
})
.count();
assert!(
files > 1,
"Expected multiple database files, found {}",
files
);
// Clean up
db.destroy().unwrap();
}