benchmarking
benches/README.md (Normal file, 172 lines added)
@@ -0,0 +1,172 @@
# HeroDB Benchmarks

This directory contains comprehensive performance benchmarks for HeroDB's storage backends (redb and sled).

## Quick Start

```bash
# Run all benchmarks
cargo bench

# Run a specific suite
cargo bench --bench single_ops

# Quick run (fewer samples)
cargo bench -- --quick
```

## Benchmark Suites

### 1. Single Operations (`single_ops.rs`)
Measures individual operation latency:
- **String operations**: SET, GET, DEL, EXISTS
- **Hash operations**: HSET, HGET, HGETALL, HDEL, HEXISTS
- **List operations**: LPUSH, RPUSH, LPOP, RPOP, LRANGE

### 2. Bulk Operations (`bulk_ops.rs`)
Tests throughput with varying batch sizes:
- Bulk insert (100, 1K, 10K records)
- Bulk read (sequential and random)
- Bulk update and delete
- Mixed workload (70% reads, 30% writes)
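
The 70/30 read/write split in the mixed workload comes from a simple modulo on the iteration index — the same `i % 10 < 7` test used by `bench_mixed_workload` in `bulk_ops.rs`. A minimal standalone sketch:

```rust
// Mixed-workload split: in every block of 10 iterations, indices 0..=6
// are reads (70%) and 7..=9 are writes (30%).
fn is_read(i: usize) -> bool {
    i % 10 < 7
}

fn main() {
    let reads = (0..1_000).filter(|&i| is_read(i)).count();
    println!("{} reads, {} writes", reads, 1_000 - reads); // 700 reads, 300 writes
}
```

Keying the split off the deterministic index (rather than a random draw) keeps the read/write interleaving identical across backends.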

### 3. Scan Operations (`scan_ops.rs`)
Evaluates iteration and filtering:
- SCAN with pattern matching
- HSCAN for hash fields
- KEYS operation
- DBSIZE, HKEYS, HVALS

### 4. Concurrent Operations (`concurrent_ops.rs`)
Simulates multi-client scenarios:
- Concurrent writes (10, 50 clients)
- Concurrent reads (10, 50 clients)
- Mixed concurrent workload
- Concurrent hash and list operations

### 5. Memory Profiling (`memory_profile.rs`)
Tracks memory usage patterns:
- Per-operation memory allocation
- Peak memory usage
- Memory efficiency (bytes per record)
- Allocation count tracking
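
Memory efficiency is just peak allocated bytes divided by record count; the human-readable formatting below mirrors `MemoryMetrics::format_bytes` from `benches/common/metrics.rs` (B/KB/MB tiers shown; the sample numbers are hypothetical):

```rust
// Size formatting mirroring MemoryMetrics::format_bytes in common/metrics.rs.
fn format_bytes(bytes: usize) -> String {
    if bytes < 1024 {
        format!("{} B", bytes)
    } else if bytes < 1024 * 1024 {
        format!("{:.2} KB", bytes as f64 / 1024.0)
    } else {
        format!("{:.2} MB", bytes as f64 / (1024.0 * 1024.0))
    }
}

fn main() {
    let peak_bytes = 1_572_864; // hypothetical peak after inserting 10_000 records
    let records = 10_000;
    println!("peak = {}", format_bytes(peak_bytes)); // peak = 1.50 MB
    println!("per record = {}", format_bytes(peak_bytes / records)); // per record = 157 B
}
```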

## Common Infrastructure

The `common/` directory provides shared utilities:

- **`data_generator.rs`**: Deterministic test data generation
- **`backends.rs`**: Backend setup and management
- **`metrics.rs`**: Custom metrics collection and export
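
Test data is reproducible because every generator is seeded, and keys are zero-padded so lexicographic order matches insertion order. The key format below is taken from `DataGenerator::generate_key` in `common/data_generator.rs`:

```rust
// Key format used by DataGenerator::generate_key: "<prefix>:<8-digit id>".
fn generate_key(prefix: &str, id: usize) -> String {
    format!("{}:{:08}", prefix, id)
}

fn main() {
    assert_eq!(generate_key("bench:key", 42), "bench:key:00000042");
    // Zero-padding keeps lexicographic and numeric order in sync:
    assert!(generate_key("bench:key", 9) < generate_key("bench:key", 10));
    println!("ok");
}
```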

## Results Analysis

### Parse Results

```bash
python3 scripts/parse_results.py target/criterion --csv results.csv --json results.json
```

### Compare Backends

```bash
python3 scripts/compare_backends.py results.csv --export comparison.csv
```

### View HTML Reports

Open `target/criterion/report/index.html` in a browser for interactive charts.

## Documentation

- **[Running Benchmarks](../docs/running_benchmarks.md)** - Quick start guide
- **[Benchmarking Guide](../docs/benchmarking.md)** - Complete user guide
- **[Architecture](../docs/benchmark_architecture.md)** - System design
- **[Implementation Plan](../docs/benchmark_implementation_plan.md)** - Development details
- **[Sample Results](../docs/benchmark_results_sample.md)** - Example analysis

## Key Features

✅ **Statistical Rigor**: Uses Criterion for statistically sound measurements
✅ **Fair Comparison**: Identical test datasets across all backends
✅ **Reproducibility**: Fixed random seeds for deterministic results
✅ **Comprehensive Coverage**: Single ops, bulk ops, scans, concurrency
✅ **Memory Profiling**: Custom allocator tracking
✅ **Multiple Formats**: Terminal, CSV, JSON, HTML outputs

## Performance Tips

### For Accurate Results

1. **System Preparation**
   - Close unnecessary applications
   - Disable CPU frequency scaling
   - Ensure stable power supply

2. **Benchmark Configuration**
   - Use a sufficient sample size (100+)
   - Allow proper warm-up time
   - Run multiple iterations

3. **Environment Isolation**
   - Use temporary directories
   - Clean state between benchmarks
   - Avoid shared resources

### For Faster Iteration

```bash
# Quick mode (fewer samples)
cargo bench -- --quick

# Specific operation only
cargo bench -- single_ops/strings/set

# Specific backend only
cargo bench -- redb
```

## Troubleshooting

### High Variance
- Close background applications
- Disable CPU frequency scaling
- Increase sample size

### Out of Memory
- Run suites separately
- Reduce dataset sizes
- Increase system swap

### Slow Benchmarks
- Use the `--quick` flag
- Run specific benchmarks
- Reduce measurement time

See [Running Benchmarks](../docs/running_benchmarks.md) for detailed troubleshooting.

## Contributing

When adding new benchmarks:

1. Follow existing patterns in benchmark files
2. Use common infrastructure (data_generator, backends)
3. Ensure fair comparison between backends
4. Add documentation for new metrics
5. Test with both `--quick` and full runs

## Example Output

```
single_ops/strings/set/redb/100bytes
                        time:   [1.234 µs 1.245 µs 1.256 µs]
                        thrpt:  [802.5K ops/s 810.2K ops/s 818.1K ops/s]

single_ops/strings/set/sled/100bytes
                        time:   [1.567 µs 1.578 µs 1.589 µs]
                        thrpt:  [629.5K ops/s 633.7K ops/s 638.1K ops/s]
```
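
Throughput figures are derived from latency as ops/sec = 10⁹ / mean_ns, the same formula as `BenchmarkMetrics::calculate_throughput` in `common/metrics.rs` (the sample values above are illustrative):

```rust
// Throughput from mean latency, as in BenchmarkMetrics::calculate_throughput:
// ops/sec = 1e9 / mean_ns.
fn throughput_ops_sec(mean_ns: u64) -> f64 {
    1_000_000_000.0 / mean_ns as f64
}

fn main() {
    // A 1.245 µs mean latency corresponds to roughly 803K ops/s.
    println!("{:.0} ops/s", throughput_ops_sec(1_245));
}
```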

## License

Same as HeroDB project.

benches/bulk_ops.rs (Normal file, 336 lines added)
@@ -0,0 +1,336 @@
// benches/bulk_ops.rs
use criterion::{criterion_group, criterion_main, Criterion, BenchmarkId, BatchSize};

mod common;
use common::*;

/// Benchmark bulk insert operations with varying batch sizes
fn bench_bulk_insert(c: &mut Criterion) {
    let mut group = c.benchmark_group("bulk_ops/insert");

    for size in [100, 1_000, 10_000] {
        for backend_type in BackendType::all() {
            group.bench_with_input(
                BenchmarkId::new(format!("{}/size", backend_type.name()), size),
                &(backend_type, size),
                |b, &(backend_type, size)| {
                    b.iter_batched(
                        || {
                            let backend = BenchmarkBackend::new(backend_type).unwrap();
                            let mut generator = DataGenerator::new(42);
                            let data = generator.generate_string_pairs(size, 100);
                            (backend, data)
                        },
                        |(backend, data)| {
                            for (key, value) in data {
                                backend.storage.set(key, value).unwrap();
                            }
                        },
                        BatchSize::SmallInput,
                    );
                },
            );
        }
    }

    group.finish();
}

/// Benchmark bulk sequential read operations
fn bench_bulk_read_sequential(c: &mut Criterion) {
    let mut group = c.benchmark_group("bulk_ops/read_sequential");

    for size in [1_000, 10_000] {
        for backend_type in BackendType::all() {
            let backend = setup_populated_backend(backend_type, size, 100)
                .expect("Failed to setup backend");
            let generator = DataGenerator::new(42);

            group.bench_with_input(
                BenchmarkId::new(format!("{}/size", backend.name()), size),
                &(backend, size),
                |b, (backend, size)| {
                    b.iter(|| {
                        for i in 0..*size {
                            let key = generator.generate_key("bench:key", i);
                            backend.storage.get(&key).unwrap();
                        }
                    });
                },
            );
        }
    }

    group.finish();
}

/// Benchmark bulk random read operations
fn bench_bulk_read_random(c: &mut Criterion) {
    let mut group = c.benchmark_group("bulk_ops/read_random");

    for size in [1_000, 10_000] {
        for backend_type in BackendType::all() {
            let backend = setup_populated_backend(backend_type, size, 100)
                .expect("Failed to setup backend");
            let generator = DataGenerator::new(42);

            // Pre-generate random indices for fair comparison
            let indices: Vec<usize> = (0..size)
                .map(|_| rand::random::<usize>() % size)
                .collect();

            group.bench_with_input(
                BenchmarkId::new(format!("{}/size", backend.name()), size),
                &(backend, indices),
                |b, (backend, indices)| {
                    b.iter(|| {
                        for &idx in indices {
                            let key = generator.generate_key("bench:key", idx);
                            backend.storage.get(&key).unwrap();
                        }
                    });
                },
            );
        }
    }

    group.finish();
}

/// Benchmark bulk update operations
fn bench_bulk_update(c: &mut Criterion) {
    let mut group = c.benchmark_group("bulk_ops/update");

    for size in [100, 1_000, 10_000] {
        for backend_type in BackendType::all() {
            group.bench_with_input(
                BenchmarkId::new(format!("{}/size", backend_type.name()), size),
                &(backend_type, size),
                |b, &(backend_type, size)| {
                    b.iter_batched(
                        || {
                            let backend = setup_populated_backend(backend_type, size, 100).unwrap();
                            let mut generator = DataGenerator::new(43); // Different seed for updates
                            let updates = generator.generate_string_pairs(size, 100);
                            (backend, updates)
                        },
                        |(backend, updates)| {
                            for (key, value) in updates {
                                backend.storage.set(key, value).unwrap();
                            }
                        },
                        BatchSize::SmallInput,
                    );
                },
            );
        }
    }

    group.finish();
}

/// Benchmark bulk delete operations
fn bench_bulk_delete(c: &mut Criterion) {
    let mut group = c.benchmark_group("bulk_ops/delete");

    for size in [100, 1_000, 10_000] {
        for backend_type in BackendType::all() {
            group.bench_with_input(
                BenchmarkId::new(format!("{}/size", backend_type.name()), size),
                &(backend_type, size),
                |b, &(backend_type, size)| {
                    b.iter_batched(
                        || {
                            let backend = setup_populated_backend(backend_type, size, 100).unwrap();
                            let generator = DataGenerator::new(42);
                            let keys: Vec<String> = (0..size)
                                .map(|i| generator.generate_key("bench:key", i))
                                .collect();
                            (backend, keys)
                        },
                        |(backend, keys)| {
                            for key in keys {
                                backend.storage.del(key).unwrap();
                            }
                        },
                        BatchSize::SmallInput,
                    );
                },
            );
        }
    }

    group.finish();
}

/// Benchmark bulk hash insert operations
fn bench_bulk_hash_insert(c: &mut Criterion) {
    let mut group = c.benchmark_group("bulk_ops/hash_insert");

    for size in [100, 1_000] {
        for backend_type in BackendType::all() {
            group.bench_with_input(
                BenchmarkId::new(format!("{}/size", backend_type.name()), size),
                &(backend_type, size),
                |b, &(backend_type, size)| {
                    b.iter_batched(
                        || {
                            let backend = BenchmarkBackend::new(backend_type).unwrap();
                            let mut generator = DataGenerator::new(42);
                            let data = generator.generate_hash_data(size, 10, 100);
                            (backend, data)
                        },
                        |(backend, data)| {
                            for (key, fields) in data {
                                backend.storage.hset(&key, fields).unwrap();
                            }
                        },
                        BatchSize::SmallInput,
                    );
                },
            );
        }
    }

    group.finish();
}

/// Benchmark bulk hash read operations (HGETALL)
fn bench_bulk_hash_read(c: &mut Criterion) {
    let mut group = c.benchmark_group("bulk_ops/hash_read");

    for size in [100, 1_000] {
        for backend_type in BackendType::all() {
            let backend = setup_populated_backend_hashes(backend_type, size, 10, 100)
                .expect("Failed to setup backend");
            let generator = DataGenerator::new(42);

            group.bench_with_input(
                BenchmarkId::new(format!("{}/size", backend.name()), size),
                &(backend, size),
                |b, (backend, size)| {
                    b.iter(|| {
                        for i in 0..*size {
                            let key = generator.generate_key("bench:hash", i);
                            backend.storage.hgetall(&key).unwrap();
                        }
                    });
                },
            );
        }
    }

    group.finish();
}

/// Benchmark bulk list insert operations
fn bench_bulk_list_insert(c: &mut Criterion) {
    let mut group = c.benchmark_group("bulk_ops/list_insert");

    for size in [100, 1_000] {
        for backend_type in BackendType::all() {
            group.bench_with_input(
                BenchmarkId::new(format!("{}/size", backend_type.name()), size),
                &(backend_type, size),
                |b, &(backend_type, size)| {
                    b.iter_batched(
                        || {
                            let backend = BenchmarkBackend::new(backend_type).unwrap();
                            let mut generator = DataGenerator::new(42);
                            let data = generator.generate_list_data(size, 10, 100);
                            (backend, data)
                        },
                        |(backend, data)| {
                            for (key, elements) in data {
                                backend.storage.rpush(&key, elements).unwrap();
                            }
                        },
                        BatchSize::SmallInput,
                    );
                },
            );
        }
    }

    group.finish();
}

/// Benchmark bulk list read operations (LRANGE)
fn bench_bulk_list_read(c: &mut Criterion) {
    let mut group = c.benchmark_group("bulk_ops/list_read");

    for size in [100, 1_000] {
        for backend_type in BackendType::all() {
            let backend = setup_populated_backend_lists(backend_type, size, 10, 100)
                .expect("Failed to setup backend");
            let generator = DataGenerator::new(42);

            group.bench_with_input(
                BenchmarkId::new(format!("{}/size", backend.name()), size),
                &(backend, size),
                |b, (backend, size)| {
                    b.iter(|| {
                        for i in 0..*size {
                            let key = generator.generate_key("bench:list", i);
                            backend.storage.lrange(&key, 0, -1).unwrap();
                        }
                    });
                },
            );
        }
    }

    group.finish();
}

/// Benchmark mixed workload (70% reads, 30% writes)
fn bench_mixed_workload(c: &mut Criterion) {
    let mut group = c.benchmark_group("bulk_ops/mixed_workload");

    for size in [1_000, 10_000] {
        for backend_type in BackendType::all() {
            let backend = setup_populated_backend(backend_type, size, 100)
                .expect("Failed to setup backend");
            let mut generator = DataGenerator::new(42);

            group.bench_with_input(
                BenchmarkId::new(format!("{}/size", backend.name()), size),
                &(backend, size),
                |b, (backend, size)| {
                    b.iter(|| {
                        for i in 0..*size {
                            if i % 10 < 7 {
                                // 70% reads
                                let key = generator.generate_key("bench:key", i % size);
                                backend.storage.get(&key).unwrap();
                            } else {
                                // 30% writes
                                let key = generator.generate_key("bench:key", i);
                                let value = generator.generate_value(100);
                                backend.storage.set(key, value).unwrap();
                            }
                        }
                    });
                },
            );
        }
    }

    group.finish();
}

criterion_group!(
    benches,
    bench_bulk_insert,
    bench_bulk_read_sequential,
    bench_bulk_read_random,
    bench_bulk_update,
    bench_bulk_delete,
    bench_bulk_hash_insert,
    bench_bulk_hash_read,
    bench_bulk_list_insert,
    bench_bulk_list_read,
    bench_mixed_workload,
);

criterion_main!(benches);
benches/common/backends.rs (Normal file, 197 lines added)
@@ -0,0 +1,197 @@
// benches/common/backends.rs
use herodb::storage::Storage;
use herodb::storage_sled::SledStorage;
use herodb::storage_trait::StorageBackend;
use std::sync::Arc;
use tempfile::TempDir;

/// Backend type identifier
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum BackendType {
    Redb,
    Sled,
}

impl BackendType {
    pub fn name(&self) -> &'static str {
        match self {
            BackendType::Redb => "redb",
            BackendType::Sled => "sled",
        }
    }

    pub fn all() -> Vec<BackendType> {
        vec![BackendType::Redb, BackendType::Sled]
    }
}

/// Wrapper for benchmark backends with automatic cleanup
pub struct BenchmarkBackend {
    pub storage: Arc<dyn StorageBackend>,
    pub backend_type: BackendType,
    _temp_dir: TempDir, // Kept for automatic cleanup
}

impl BenchmarkBackend {
    /// Create a new redb backend for benchmarking
    pub fn new_redb() -> Result<Self, Box<dyn std::error::Error>> {
        let temp_dir = TempDir::new()?;
        let db_path = temp_dir.path().join("bench.db");
        let storage = Storage::new(db_path, false, None)?;

        Ok(Self {
            storage: Arc::new(storage),
            backend_type: BackendType::Redb,
            _temp_dir: temp_dir,
        })
    }

    /// Create a new sled backend for benchmarking
    pub fn new_sled() -> Result<Self, Box<dyn std::error::Error>> {
        let temp_dir = TempDir::new()?;
        let db_path = temp_dir.path().join("bench.sled");
        let storage = SledStorage::new(db_path, false, None)?;

        Ok(Self {
            storage: Arc::new(storage),
            backend_type: BackendType::Sled,
            _temp_dir: temp_dir,
        })
    }

    /// Create a backend of the specified type
    pub fn new(backend_type: BackendType) -> Result<Self, Box<dyn std::error::Error>> {
        match backend_type {
            BackendType::Redb => Self::new_redb(),
            BackendType::Sled => Self::new_sled(),
        }
    }

    /// Get the backend name for display
    pub fn name(&self) -> &'static str {
        self.backend_type.name()
    }

    /// Pre-populate the backend with test data
    pub fn populate_strings(&self, data: &[(String, String)]) -> Result<(), Box<dyn std::error::Error>> {
        for (key, value) in data {
            self.storage.set(key.clone(), value.clone())?;
        }
        Ok(())
    }

    /// Pre-populate with hash data
    pub fn populate_hashes(&self, data: &[(String, Vec<(String, String)>)]) -> Result<(), Box<dyn std::error::Error>> {
        for (key, fields) in data {
            self.storage.hset(key, fields.clone())?;
        }
        Ok(())
    }

    /// Pre-populate with list data
    pub fn populate_lists(&self, data: &[(String, Vec<String>)]) -> Result<(), Box<dyn std::error::Error>> {
        for (key, elements) in data {
            self.storage.rpush(key, elements.clone())?;
        }
        Ok(())
    }

    /// Clear all data from the backend
    pub fn clear(&self) -> Result<(), Box<dyn std::error::Error>> {
        self.storage.flushdb()?;
        Ok(())
    }

    /// Get the number of keys in the database
    pub fn dbsize(&self) -> Result<i64, Box<dyn std::error::Error>> {
        Ok(self.storage.dbsize()?)
    }
}

/// Helper function to create and populate a backend for read benchmarks
pub fn setup_populated_backend(
    backend_type: BackendType,
    num_keys: usize,
    value_size: usize,
) -> Result<BenchmarkBackend, Box<dyn std::error::Error>> {
    use super::DataGenerator;

    let backend = BenchmarkBackend::new(backend_type)?;
    let mut generator = DataGenerator::new(42);
    let data = generator.generate_string_pairs(num_keys, value_size);
    backend.populate_strings(&data)?;

    Ok(backend)
}

/// Helper function to create and populate a backend with hash data
pub fn setup_populated_backend_hashes(
    backend_type: BackendType,
    num_hashes: usize,
    fields_per_hash: usize,
    value_size: usize,
) -> Result<BenchmarkBackend, Box<dyn std::error::Error>> {
    use super::DataGenerator;

    let backend = BenchmarkBackend::new(backend_type)?;
    let mut generator = DataGenerator::new(42);
    let data = generator.generate_hash_data(num_hashes, fields_per_hash, value_size);
    backend.populate_hashes(&data)?;

    Ok(backend)
}

/// Helper function to create and populate a backend with list data
pub fn setup_populated_backend_lists(
    backend_type: BackendType,
    num_lists: usize,
    elements_per_list: usize,
    element_size: usize,
) -> Result<BenchmarkBackend, Box<dyn std::error::Error>> {
    use super::DataGenerator;

    let backend = BenchmarkBackend::new(backend_type)?;
    let mut generator = DataGenerator::new(42);
    let data = generator.generate_list_data(num_lists, elements_per_list, element_size);
    backend.populate_lists(&data)?;

    Ok(backend)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_backend_creation() {
        let redb = BenchmarkBackend::new_redb();
        assert!(redb.is_ok());

        let sled = BenchmarkBackend::new_sled();
        assert!(sled.is_ok());
    }

    #[test]
    fn test_backend_populate() {
        let backend = BenchmarkBackend::new_redb().unwrap();
        let data = vec![
            ("key1".to_string(), "value1".to_string()),
            ("key2".to_string(), "value2".to_string()),
        ];

        backend.populate_strings(&data).unwrap();
        assert_eq!(backend.dbsize().unwrap(), 2);
    }

    #[test]
    fn test_backend_clear() {
        let backend = BenchmarkBackend::new_redb().unwrap();
        let data = vec![("key1".to_string(), "value1".to_string())];

        backend.populate_strings(&data).unwrap();
        assert_eq!(backend.dbsize().unwrap(), 1);

        backend.clear().unwrap();
        assert_eq!(backend.dbsize().unwrap(), 0);
    }
}
benches/common/data_generator.rs (Normal file, 131 lines added)
@@ -0,0 +1,131 @@
// benches/common/data_generator.rs
use rand::{Rng, SeedableRng};
use rand::rngs::StdRng;

/// Deterministic data generator for benchmarks
pub struct DataGenerator {
    rng: StdRng,
}

impl DataGenerator {
    /// Create a new data generator with a fixed seed for reproducibility
    pub fn new(seed: u64) -> Self {
        Self {
            rng: StdRng::seed_from_u64(seed),
        }
    }

    /// Generate a single key with the given prefix and ID
    pub fn generate_key(&self, prefix: &str, id: usize) -> String {
        format!("{}:{:08}", prefix, id)
    }

    /// Generate a random string value of the specified size
    pub fn generate_value(&mut self, size: usize) -> String {
        const CHARSET: &[u8] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
        (0..size)
            .map(|_| {
                let idx = self.rng.gen_range(0..CHARSET.len());
                CHARSET[idx] as char
            })
            .collect()
    }

    /// Generate a batch of key-value pairs
    pub fn generate_string_pairs(&mut self, count: usize, value_size: usize) -> Vec<(String, String)> {
        (0..count)
            .map(|i| {
                let key = self.generate_key("bench:key", i);
                let value = self.generate_value(value_size);
                (key, value)
            })
            .collect()
    }

    /// Generate hash data (key -> field-value pairs)
    pub fn generate_hash_data(&mut self, num_hashes: usize, fields_per_hash: usize, value_size: usize)
        -> Vec<(String, Vec<(String, String)>)> {
        (0..num_hashes)
            .map(|i| {
                let hash_key = self.generate_key("bench:hash", i);
                let fields: Vec<(String, String)> = (0..fields_per_hash)
                    .map(|j| {
                        let field = format!("field{}", j);
                        let value = self.generate_value(value_size);
                        (field, value)
                    })
                    .collect();
                (hash_key, fields)
            })
            .collect()
    }

    /// Generate list data (key -> list of elements)
    pub fn generate_list_data(&mut self, num_lists: usize, elements_per_list: usize, element_size: usize)
        -> Vec<(String, Vec<String>)> {
        (0..num_lists)
            .map(|i| {
                let list_key = self.generate_key("bench:list", i);
                let elements: Vec<String> = (0..elements_per_list)
                    .map(|_| self.generate_value(element_size))
                    .collect();
                (list_key, elements)
            })
            .collect()
    }

    /// Generate keys for pattern matching tests
    pub fn generate_pattern_keys(&mut self, count: usize) -> Vec<String> {
        let mut keys = Vec::new();

        // Generate keys with different patterns
        for i in 0..count / 3 {
            keys.push(format!("user:{}:profile", i));
        }
        for i in 0..count / 3 {
            keys.push(format!("session:{}:data", i));
        }
        for i in 0..count / 3 {
            keys.push(format!("cache:{}:value", i));
        }

        keys
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_deterministic_generation() {
        let mut generator1 = DataGenerator::new(42);
        let mut generator2 = DataGenerator::new(42);

        let pairs1 = generator1.generate_string_pairs(10, 50);
        let pairs2 = generator2.generate_string_pairs(10, 50);

        assert_eq!(pairs1, pairs2, "Same seed should produce same data");
    }

    #[test]
    fn test_value_size() {
        let mut generator = DataGenerator::new(42);
        let value = generator.generate_value(100);
        assert_eq!(value.len(), 100);
    }

    #[test]
    fn test_hash_generation() {
        let mut generator = DataGenerator::new(42);
        let hashes = generator.generate_hash_data(5, 10, 50);

        assert_eq!(hashes.len(), 5);
        for (_, fields) in hashes {
            assert_eq!(fields.len(), 10);
            for (_, value) in fields {
                assert_eq!(value.len(), 50);
            }
        }
    }
}
benches/common/metrics.rs (Normal file, 289 lines added)
@@ -0,0 +1,289 @@
// benches/common/metrics.rs
use serde::{Deserialize, Serialize};
use std::time::Duration;

/// Custom metrics for benchmark results
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct BenchmarkMetrics {
    pub operation: String,
    pub backend: String,
    pub dataset_size: usize,
    pub mean_ns: u64,
    pub median_ns: u64,
    pub p95_ns: u64,
    pub p99_ns: u64,
    pub std_dev_ns: u64,
    pub throughput_ops_sec: f64,
}

impl BenchmarkMetrics {
    pub fn new(
        operation: String,
        backend: String,
        dataset_size: usize,
    ) -> Self {
        Self {
            operation,
            backend,
            dataset_size,
            mean_ns: 0,
            median_ns: 0,
            p95_ns: 0,
            p99_ns: 0,
            std_dev_ns: 0,
            throughput_ops_sec: 0.0,
        }
    }

    /// Convert to CSV row format
    pub fn to_csv_row(&self) -> String {
        format!(
            "{},{},{},{},{},{},{},{},{:.2}",
            self.backend,
            self.operation,
            self.dataset_size,
            self.mean_ns,
            self.median_ns,
            self.p95_ns,
            self.p99_ns,
            self.std_dev_ns,
            self.throughput_ops_sec
        )
    }

    /// Get CSV header
    pub fn csv_header() -> String {
        "backend,operation,dataset_size,mean_ns,median_ns,p95_ns,p99_ns,std_dev_ns,throughput_ops_sec".to_string()
    }

    /// Convert to JSON
    pub fn to_json(&self) -> serde_json::Value {
        serde_json::json!({
            "backend": self.backend,
            "operation": self.operation,
            "dataset_size": self.dataset_size,
            "metrics": {
                "mean_ns": self.mean_ns,
                "median_ns": self.median_ns,
                "p95_ns": self.p95_ns,
                "p99_ns": self.p99_ns,
                "std_dev_ns": self.std_dev_ns,
                "throughput_ops_sec": self.throughput_ops_sec
            }
        })
    }

    /// Calculate throughput from mean latency
    pub fn calculate_throughput(&mut self) {
        if self.mean_ns > 0 {
            self.throughput_ops_sec = 1_000_000_000.0 / self.mean_ns as f64;
        }
    }

    /// Format duration for display
    pub fn format_duration(nanos: u64) -> String {
        if nanos < 1_000 {
            format!("{} ns", nanos)
        } else if nanos < 1_000_000 {
            format!("{:.2} µs", nanos as f64 / 1_000.0)
        } else if nanos < 1_000_000_000 {
            format!("{:.2} ms", nanos as f64 / 1_000_000.0)
        } else {
            format!("{:.2} s", nanos as f64 / 1_000_000_000.0)
        }
    }

    /// Pretty print the metrics
    pub fn display(&self) -> String {
        format!(
            "{}/{} (n={}): mean={}, median={}, p95={}, p99={}, throughput={:.0} ops/sec",
            self.backend,
            self.operation,
            self.dataset_size,
            Self::format_duration(self.mean_ns),
            Self::format_duration(self.median_ns),
            Self::format_duration(self.p95_ns),
            Self::format_duration(self.p99_ns),
            self.throughput_ops_sec
        )
    }
}

/// Memory metrics for profiling
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct MemoryMetrics {
    pub operation: String,
    pub backend: String,
    pub allocations: usize,
    pub peak_bytes: usize,
    pub avg_bytes_per_op: f64,
}

impl MemoryMetrics {
    pub fn new(operation: String, backend: String) -> Self {
        Self {
            operation,
            backend,
            allocations: 0,
            peak_bytes: 0,
            avg_bytes_per_op: 0.0,
        }
    }

    /// Convert to CSV row format
    pub fn to_csv_row(&self) -> String {
        format!(
            "{},{},{},{},{:.2}",
            self.backend,
            self.operation,
            self.allocations,
            self.peak_bytes,
            self.avg_bytes_per_op
        )
    }

    /// Get CSV header
    pub fn csv_header() -> String {
        "backend,operation,allocations,peak_bytes,avg_bytes_per_op".to_string()
    }

    /// Format bytes for display
    pub fn format_bytes(bytes: usize) -> String {
        if bytes < 1024 {
            format!("{} B", bytes)
        } else if bytes < 1024 * 1024 {
            format!("{:.2} KB", bytes as f64 / 1024.0)
        } else if bytes < 1024 * 1024 * 1024 {
            format!("{:.2} MB", bytes as f64 / (1024.0 * 1024.0))
        } else {
            format!("{:.2} GB", bytes as f64 / (1024.0 * 1024.0 * 1024.0))
        }
    }

    /// Pretty print the metrics
    pub fn display(&self) -> String {
        format!(
            "{}/{}: {} allocations, peak={}, avg={}",
            self.backend,
            self.operation,
            self.allocations,
            Self::format_bytes(self.peak_bytes),
            Self::format_bytes(self.avg_bytes_per_op as usize)
        )
    }
}

/// Collection of benchmark results for comparison
#[derive(Debug, Default)]
pub struct BenchmarkResults {
    pub metrics: Vec<BenchmarkMetrics>,
    pub memory_metrics: Vec<MemoryMetrics>,
}

impl BenchmarkResults {
    pub fn new() -> Self {
        Self::default()
    }

    pub fn add_metric(&mut self, metric: BenchmarkMetrics) {
        self.metrics.push(metric);
    }

    pub fn add_memory_metric(&mut self, metric: MemoryMetrics) {
        self.memory_metrics.push(metric);
    }

    /// Export all metrics to CSV format
    pub fn to_csv(&self) -> String {
        let mut output = String::new();

        if !self.metrics.is_empty() {
            output.push_str(&BenchmarkMetrics::csv_header());
            output.push('\n');
            for metric in &self.metrics {
                output.push_str(&metric.to_csv_row());
                output.push('\n');
            }
        }

        if !self.memory_metrics.is_empty() {
            output.push('\n');
            output.push_str(&MemoryMetrics::csv_header());
            output.push('\n');
            for metric in &self.memory_metrics {
                output.push_str(&metric.to_csv_row());
                output.push('\n');
            }
        }

        output
    }

    /// Export all metrics to JSON format
    pub fn to_json(&self) -> serde_json::Value {
        serde_json::json!({
            "benchmarks": self.metrics.iter().map(|m| m.to_json()).collect::<Vec<_>>(),
            "memory": self.memory_metrics
        })
    }

    /// Save results to a file
    pub fn save_csv(&self, path: &str) -> std::io::Result<()> {
        std::fs::write(path, self.to_csv())
    }

    pub fn save_json(&self, path: &str) -> std::io::Result<()> {
        let json = serde_json::to_string_pretty(&self.to_json())?;
        std::fs::write(path, json)
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_metrics_creation() {
        let mut metric = BenchmarkMetrics::new(
            "set".to_string(),
            "redb".to_string(),
            1000,
        );
        metric.mean_ns = 1_245;
        metric.calculate_throughput();

        assert!(metric.throughput_ops_sec > 0.0);
    }

    #[test]
    fn test_csv_export() {
        let mut results = BenchmarkResults::new();
        let mut metric = BenchmarkMetrics::new(
            "set".to_string(),
            "redb".to_string(),
            1000,
        );
        metric.mean_ns = 1_245;
        metric.calculate_throughput();

        results.add_metric(metric);
let csv = results.to_csv();
|
||||
|
||||
assert!(csv.contains("backend,operation"));
|
||||
assert!(csv.contains("redb,set"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_duration_formatting() {
|
||||
assert_eq!(BenchmarkMetrics::format_duration(500), "500 ns");
|
||||
assert_eq!(BenchmarkMetrics::format_duration(1_500), "1.50 µs");
|
||||
assert_eq!(BenchmarkMetrics::format_duration(1_500_000), "1.50 ms");
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_bytes_formatting() {
|
||||
assert_eq!(MemoryMetrics::format_bytes(512), "512 B");
|
||||
assert_eq!(MemoryMetrics::format_bytes(2048), "2.00 KB");
|
||||
assert_eq!(MemoryMetrics::format_bytes(2_097_152), "2.00 MB");
|
||||
}
|
||||
}
|
||||
8
benches/common/mod.rs
Normal file
@@ -0,0 +1,8 @@
// benches/common/mod.rs
pub mod data_generator;
pub mod backends;
pub mod metrics;

pub use data_generator::*;
pub use backends::*;
pub use metrics::*;
317
benches/concurrent_ops.rs
Normal file
@@ -0,0 +1,317 @@
// benches/concurrent_ops.rs
use criterion::{criterion_group, criterion_main, Criterion, BenchmarkId};
use tokio::runtime::Runtime;
use std::sync::Arc;

mod common;
use common::*;

/// Benchmark concurrent write operations
fn bench_concurrent_writes(c: &mut Criterion) {
    let mut group = c.benchmark_group("concurrent_ops/writes");

    for num_clients in [10, 50] {
        for backend_type in BackendType::all() {
            let backend = BenchmarkBackend::new(backend_type).expect("Failed to create backend");
            let storage = backend.storage.clone();

            group.bench_with_input(
                BenchmarkId::new(format!("{}/clients", backend.name()), num_clients),
                &(storage, num_clients),
                |b, (storage, num_clients)| {
                    let rt = Runtime::new().unwrap();
                    b.to_async(&rt).iter(|| {
                        let storage = storage.clone();
                        let num_clients = *num_clients;
                        async move {
                            let mut tasks = Vec::new();

                            for client_id in 0..num_clients {
                                let storage = storage.clone();
                                let task = tokio::spawn(async move {
                                    let mut generator = DataGenerator::new(42 + client_id as u64);
                                    for i in 0..100 {
                                        let key = format!("client:{}:key:{}", client_id, i);
                                        let value = generator.generate_value(100);
                                        storage.set(key, value).unwrap();
                                    }
                                });
                                tasks.push(task);
                            }

                            for task in tasks {
                                task.await.unwrap();
                            }
                        }
                    });
                }
            );
        }
    }

    group.finish();
}

/// Benchmark concurrent read operations
fn bench_concurrent_reads(c: &mut Criterion) {
    let mut group = c.benchmark_group("concurrent_ops/reads");

    for num_clients in [10, 50] {
        for backend_type in BackendType::all() {
            // Pre-populate with data
            let backend = setup_populated_backend(backend_type, 10_000, 100)
                .expect("Failed to setup backend");
            let storage = backend.storage.clone();

            group.bench_with_input(
                BenchmarkId::new(format!("{}/clients", backend.name()), num_clients),
                &(storage, num_clients),
                |b, (storage, num_clients)| {
                    let rt = Runtime::new().unwrap();
                    b.to_async(&rt).iter(|| {
                        let storage = storage.clone();
                        let num_clients = *num_clients;
                        async move {
                            let mut tasks = Vec::new();

                            for client_id in 0..num_clients {
                                let storage = storage.clone();
                                let task = tokio::spawn(async move {
                                    let generator = DataGenerator::new(42);
                                    for i in 0..100 {
                                        let key_id = (client_id * 100 + i) % 10_000;
                                        let key = generator.generate_key("bench:key", key_id);
                                        storage.get(&key).unwrap();
                                    }
                                });
                                tasks.push(task);
                            }

                            for task in tasks {
                                task.await.unwrap();
                            }
                        }
                    });
                }
            );
        }
    }

    group.finish();
}

/// Benchmark mixed concurrent workload (70% reads, 30% writes)
fn bench_concurrent_mixed(c: &mut Criterion) {
    let mut group = c.benchmark_group("concurrent_ops/mixed");

    for num_clients in [10, 50] {
        for backend_type in BackendType::all() {
            // Pre-populate with data
            let backend = setup_populated_backend(backend_type, 10_000, 100)
                .expect("Failed to setup backend");
            let storage = backend.storage.clone();

            group.bench_with_input(
                BenchmarkId::new(format!("{}/clients", backend.name()), num_clients),
                &(storage, num_clients),
                |b, (storage, num_clients)| {
                    let rt = Runtime::new().unwrap();
                    b.to_async(&rt).iter(|| {
                        let storage = storage.clone();
                        let num_clients = *num_clients;
                        async move {
                            let mut tasks = Vec::new();

                            for client_id in 0..num_clients {
                                let storage = storage.clone();
                                let task = tokio::spawn(async move {
                                    let mut generator = DataGenerator::new(42 + client_id as u64);
                                    for i in 0..100 {
                                        if i % 10 < 7 {
                                            // 70% reads
                                            let key_id = (client_id * 100 + i) % 10_000;
                                            let key = generator.generate_key("bench:key", key_id);
                                            storage.get(&key).unwrap();
                                        } else {
                                            // 30% writes
                                            let key = format!("client:{}:key:{}", client_id, i);
                                            let value = generator.generate_value(100);
                                            storage.set(key, value).unwrap();
                                        }
                                    }
                                });
                                tasks.push(task);
                            }

                            for task in tasks {
                                task.await.unwrap();
                            }
                        }
                    });
                }
            );
        }
    }

    group.finish();
}

/// Benchmark concurrent hash operations
fn bench_concurrent_hash_ops(c: &mut Criterion) {
    let mut group = c.benchmark_group("concurrent_ops/hash_ops");

    for num_clients in [10, 50] {
        for backend_type in BackendType::all() {
            let backend = BenchmarkBackend::new(backend_type).expect("Failed to create backend");
            let storage = backend.storage.clone();

            group.bench_with_input(
                BenchmarkId::new(format!("{}/clients", backend.name()), num_clients),
                &(storage, num_clients),
                |b, (storage, num_clients)| {
                    let rt = Runtime::new().unwrap();
                    b.to_async(&rt).iter(|| {
                        let storage = storage.clone();
                        let num_clients = *num_clients;
                        async move {
                            let mut tasks = Vec::new();

                            for client_id in 0..num_clients {
                                let storage = storage.clone();
                                let task = tokio::spawn(async move {
                                    let mut generator = DataGenerator::new(42 + client_id as u64);
                                    for i in 0..50 {
                                        let key = format!("client:{}:hash:{}", client_id, i);
                                        let field = format!("field{}", i % 10);
                                        let value = generator.generate_value(100);
                                        storage.hset(&key, vec![(field, value)]).unwrap();
                                    }
                                });
                                tasks.push(task);
                            }

                            for task in tasks {
                                task.await.unwrap();
                            }
                        }
                    });
                }
            );
        }
    }

    group.finish();
}

/// Benchmark concurrent list operations
fn bench_concurrent_list_ops(c: &mut Criterion) {
    let mut group = c.benchmark_group("concurrent_ops/list_ops");

    for num_clients in [10, 50] {
        for backend_type in BackendType::all() {
            let backend = BenchmarkBackend::new(backend_type).expect("Failed to create backend");
            let storage = backend.storage.clone();

            group.bench_with_input(
                BenchmarkId::new(format!("{}/clients", backend.name()), num_clients),
                &(storage, num_clients),
                |b, (storage, num_clients)| {
                    let rt = Runtime::new().unwrap();
                    b.to_async(&rt).iter(|| {
                        let storage = storage.clone();
                        let num_clients = *num_clients;
                        async move {
                            let mut tasks = Vec::new();

                            for client_id in 0..num_clients {
                                let storage = storage.clone();
                                let task = tokio::spawn(async move {
                                    let mut generator = DataGenerator::new(42 + client_id as u64);
                                    for i in 0..50 {
                                        let key = format!("client:{}:list:{}", client_id, i);
                                        let element = generator.generate_value(100);
                                        storage.rpush(&key, vec![element]).unwrap();
                                    }
                                });
                                tasks.push(task);
                            }

                            for task in tasks {
                                task.await.unwrap();
                            }
                        }
                    });
                }
            );
        }
    }

    group.finish();
}

/// Benchmark concurrent scan operations
fn bench_concurrent_scans(c: &mut Criterion) {
    let mut group = c.benchmark_group("concurrent_ops/scans");

    for num_clients in [10, 50] {
        for backend_type in BackendType::all() {
            // Pre-populate with data
            let backend = setup_populated_backend(backend_type, 10_000, 100)
                .expect("Failed to setup backend");
            let storage = backend.storage.clone();

            group.bench_with_input(
                BenchmarkId::new(format!("{}/clients", backend.name()), num_clients),
                &(storage, num_clients),
                |b, (storage, num_clients)| {
                    let rt = Runtime::new().unwrap();
                    b.to_async(&rt).iter(|| {
                        let storage = storage.clone();
                        let num_clients = *num_clients;
                        async move {
                            let mut tasks = Vec::new();

                            for _client_id in 0..num_clients {
                                let storage = storage.clone();
                                let task = tokio::spawn(async move {
                                    let mut cursor = 0u64;
                                    let mut total = 0;
                                    loop {
                                        let (next_cursor, items) = storage
                                            .scan(cursor, None, Some(100))
                                            .unwrap();
                                        total += items.len();
                                        if next_cursor == 0 {
                                            break;
                                        }
                                        cursor = next_cursor;
                                    }
                                    total
                                });
                                tasks.push(task);
                            }

                            for task in tasks {
                                task.await.unwrap();
                            }
                        }
                    });
                }
            );
        }
    }

    group.finish();
}

criterion_group!(
    benches,
    bench_concurrent_writes,
    bench_concurrent_reads,
    bench_concurrent_mixed,
    bench_concurrent_hash_ops,
    bench_concurrent_list_ops,
    bench_concurrent_scans,
);

criterion_main!(benches);
337
benches/memory_profile.rs
Normal file
@@ -0,0 +1,337 @@
// benches/memory_profile.rs
use criterion::{criterion_group, criterion_main, Criterion, BenchmarkId, BatchSize};
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

mod common;
use common::*;

// Simple memory tracking allocator
struct TrackingAllocator;

static ALLOCATED: AtomicUsize = AtomicUsize::new(0);
static DEALLOCATED: AtomicUsize = AtomicUsize::new(0);
static PEAK: AtomicUsize = AtomicUsize::new(0);
static ALLOC_COUNT: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for TrackingAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        let ret = System.alloc(layout);
        if !ret.is_null() {
            let size = layout.size();
            ALLOCATED.fetch_add(size, Ordering::SeqCst);
            ALLOC_COUNT.fetch_add(1, Ordering::SeqCst);

            // Update peak if necessary; saturating_sub guards against underflow
            // when counters are reset while earlier allocations are still live
            let current = ALLOCATED
                .load(Ordering::SeqCst)
                .saturating_sub(DEALLOCATED.load(Ordering::SeqCst));
            let mut peak = PEAK.load(Ordering::SeqCst);
            while current > peak {
                match PEAK.compare_exchange_weak(peak, current, Ordering::SeqCst, Ordering::SeqCst) {
                    Ok(_) => break,
                    Err(x) => peak = x,
                }
            }
        }
        ret
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout);
        DEALLOCATED.fetch_add(layout.size(), Ordering::SeqCst);
    }
}

#[global_allocator]
static GLOBAL: TrackingAllocator = TrackingAllocator;

/// Reset memory tracking counters
fn reset_memory_tracking() {
    ALLOCATED.store(0, Ordering::SeqCst);
    DEALLOCATED.store(0, Ordering::SeqCst);
    PEAK.store(0, Ordering::SeqCst);
    ALLOC_COUNT.store(0, Ordering::SeqCst);
}

/// Get current memory stats: (current bytes, peak bytes, allocation count)
fn get_memory_stats() -> (usize, usize, usize) {
    let allocated = ALLOCATED.load(Ordering::SeqCst);
    let deallocated = DEALLOCATED.load(Ordering::SeqCst);
    let peak = PEAK.load(Ordering::SeqCst);
    let alloc_count = ALLOC_COUNT.load(Ordering::SeqCst);

    let current = allocated.saturating_sub(deallocated);
    (current, peak, alloc_count)
}

/// Profile memory usage for single SET operations
fn profile_memory_set(c: &mut Criterion) {
    let mut group = c.benchmark_group("memory_profile/set");

    for backend_type in BackendType::all() {
        group.bench_with_input(
            BenchmarkId::new(backend_type.name(), "100bytes"),
            &backend_type,
            |b, &backend_type| {
                b.iter_batched(
                    || {
                        reset_memory_tracking();
                        let backend = BenchmarkBackend::new(backend_type).unwrap();
                        let mut generator = DataGenerator::new(42);
                        let key = generator.generate_key("bench:key", 0);
                        let value = generator.generate_value(100);
                        (backend, key, value)
                    },
                    |(backend, key, value)| {
                        backend.storage.set(key, value).unwrap();
                        let (current, peak, allocs) = get_memory_stats();
                        println!("{}: current={}, peak={}, allocs={}",
                            backend.name(), current, peak, allocs);
                    },
                    BatchSize::SmallInput
                );
            }
        );
    }

    group.finish();
}

/// Profile memory usage for single GET operations
fn profile_memory_get(c: &mut Criterion) {
    let mut group = c.benchmark_group("memory_profile/get");

    for backend_type in BackendType::all() {
        let backend = setup_populated_backend(backend_type, 1_000, 100)
            .expect("Failed to setup backend");
        let generator = DataGenerator::new(42);

        group.bench_with_input(
            BenchmarkId::new(backend.name(), "100bytes"),
            &backend,
            |b, backend| {
                b.iter_batched(
                    || {
                        reset_memory_tracking();
                        generator.generate_key("bench:key", 0)
                    },
                    |key| {
                        backend.storage.get(&key).unwrap();
                        let (current, peak, allocs) = get_memory_stats();
                        println!("{}: current={}, peak={}, allocs={}",
                            backend.name(), current, peak, allocs);
                    },
                    BatchSize::SmallInput
                );
            }
        );
    }

    group.finish();
}

/// Profile memory usage for bulk insert operations
fn profile_memory_bulk_insert(c: &mut Criterion) {
    let mut group = c.benchmark_group("memory_profile/bulk_insert");

    for size in [100, 1_000] {
        for backend_type in BackendType::all() {
            group.bench_with_input(
                BenchmarkId::new(format!("{}/size", backend_type.name()), size),
                &(backend_type, size),
                |b, &(backend_type, size)| {
                    b.iter_batched(
                        || {
                            reset_memory_tracking();
                            let backend = BenchmarkBackend::new(backend_type).unwrap();
                            let mut generator = DataGenerator::new(42);
                            let data = generator.generate_string_pairs(size, 100);
                            (backend, data)
                        },
                        |(backend, data)| {
                            for (key, value) in data {
                                backend.storage.set(key, value).unwrap();
                            }
                            let (current, peak, allocs) = get_memory_stats();
                            println!("{} (n={}): current={}, peak={}, allocs={}, bytes_per_record={}",
                                backend.name(), size, current, peak, allocs, peak / size);
                        },
                        BatchSize::SmallInput
                    );
                }
            );
        }
    }

    group.finish();
}

/// Profile memory usage for hash operations
fn profile_memory_hash_ops(c: &mut Criterion) {
    let mut group = c.benchmark_group("memory_profile/hash_ops");

    for backend_type in BackendType::all() {
        group.bench_with_input(
            BenchmarkId::new(backend_type.name(), "hset"),
            &backend_type,
            |b, &backend_type| {
                b.iter_batched(
                    || {
                        reset_memory_tracking();
                        let backend = BenchmarkBackend::new(backend_type).unwrap();
                        let mut generator = DataGenerator::new(42);
                        let key = generator.generate_key("bench:hash", 0);
                        let fields = vec![
                            ("field1".to_string(), generator.generate_value(100)),
                            ("field2".to_string(), generator.generate_value(100)),
                            ("field3".to_string(), generator.generate_value(100)),
                        ];
                        (backend, key, fields)
                    },
                    |(backend, key, fields)| {
                        backend.storage.hset(&key, fields).unwrap();
                        let (current, peak, allocs) = get_memory_stats();
                        println!("{}: current={}, peak={}, allocs={}",
                            backend.name(), current, peak, allocs);
                    },
                    BatchSize::SmallInput
                );
            }
        );
    }

    group.finish();
}

/// Profile memory usage for list operations
fn profile_memory_list_ops(c: &mut Criterion) {
    let mut group = c.benchmark_group("memory_profile/list_ops");

    for backend_type in BackendType::all() {
        group.bench_with_input(
            BenchmarkId::new(backend_type.name(), "rpush"),
            &backend_type,
            |b, &backend_type| {
                b.iter_batched(
                    || {
                        reset_memory_tracking();
                        let backend = BenchmarkBackend::new(backend_type).unwrap();
                        let mut generator = DataGenerator::new(42);
                        let key = generator.generate_key("bench:list", 0);
                        let elements = vec![
                            generator.generate_value(100),
                            generator.generate_value(100),
                            generator.generate_value(100),
                        ];
                        (backend, key, elements)
                    },
                    |(backend, key, elements)| {
                        backend.storage.rpush(&key, elements).unwrap();
                        let (current, peak, allocs) = get_memory_stats();
                        println!("{}: current={}, peak={}, allocs={}",
                            backend.name(), current, peak, allocs);
                    },
                    BatchSize::SmallInput
                );
            }
        );
    }

    group.finish();
}

/// Profile memory usage for scan operations
fn profile_memory_scan(c: &mut Criterion) {
    let mut group = c.benchmark_group("memory_profile/scan");

    for size in [1_000, 10_000] {
        for backend_type in BackendType::all() {
            let backend = setup_populated_backend(backend_type, size, 100)
                .expect("Failed to setup backend");

            group.bench_with_input(
                BenchmarkId::new(format!("{}/size", backend.name()), size),
                &backend,
                |b, backend| {
                    b.iter(|| {
                        reset_memory_tracking();
                        let mut cursor = 0u64;
                        let mut total = 0;
                        loop {
                            let (next_cursor, items) = backend.storage
                                .scan(cursor, None, Some(100))
                                .unwrap();
                            total += items.len();
                            if next_cursor == 0 {
                                break;
                            }
                            cursor = next_cursor;
                        }
                        let (current, peak, allocs) = get_memory_stats();
                        println!("{} (n={}): scanned={}, current={}, peak={}, allocs={}",
                            backend.name(), size, total, current, peak, allocs);
                        total
                    });
                }
            );
        }
    }

    group.finish();
}

/// Profile memory efficiency (bytes per record stored)
fn profile_memory_efficiency(c: &mut Criterion) {
    let mut group = c.benchmark_group("memory_profile/efficiency");

    for size in [1_000, 10_000] {
        for backend_type in BackendType::all() {
            group.bench_with_input(
                BenchmarkId::new(format!("{}/size", backend_type.name()), size),
                &(backend_type, size),
                |b, &(backend_type, size)| {
                    b.iter_batched(
                        || {
                            reset_memory_tracking();
                            let backend = BenchmarkBackend::new(backend_type).unwrap();
                            let mut generator = DataGenerator::new(42);
                            let data = generator.generate_string_pairs(size, 100);
                            (backend, data)
                        },
                        |(backend, data)| {
                            let data_size: usize = data.iter()
                                .map(|(k, v)| k.len() + v.len())
                                .sum();

                            for (key, value) in data {
                                backend.storage.set(key, value).unwrap();
                            }

                            let (current, peak, allocs) = get_memory_stats();
                            let overhead_pct = ((peak as f64 - data_size as f64) / data_size as f64) * 100.0;

                            println!("{} (n={}): data_size={}, peak={}, overhead={:.1}%, bytes_per_record={}, allocs={}",
                                backend.name(), size, data_size, peak, overhead_pct,
                                peak / size, allocs);
                        },
                        BatchSize::SmallInput
                    );
                }
            );
        }
    }

    group.finish();
}

criterion_group!(
    benches,
    profile_memory_set,
    profile_memory_get,
    profile_memory_bulk_insert,
    profile_memory_hash_ops,
    profile_memory_list_ops,
    profile_memory_scan,
    profile_memory_efficiency,
);

criterion_main!(benches);
339
benches/scan_ops.rs
Normal file
@@ -0,0 +1,339 @@
|
||||
// benches/scan_ops.rs
|
||||
use criterion::{criterion_group, criterion_main, Criterion, BenchmarkId};
|
||||
|
||||
mod common;
|
||||
use common::*;
|
||||
|
||||
/// Benchmark SCAN operation - full database scan
|
||||
fn bench_scan_full(c: &mut Criterion) {
|
||||
let mut group = c.benchmark_group("scan_ops/scan_full");
|
||||
|
||||
for size in [1_000, 10_000] {
|
||||
for backend_type in BackendType::all() {
|
||||
let backend = setup_populated_backend(backend_type, size, 100)
|
||||
.expect("Failed to setup backend");
|
||||
|
||||
group.bench_with_input(
|
||||
BenchmarkId::new(format!("{}/size", backend.name()), size),
|
||||
&backend,
|
||||
|b, backend| {
|
||||
b.iter(|| {
|
||||
let mut cursor = 0u64;
|
||||
let mut total = 0;
|
||||
loop {
|
||||
let (next_cursor, items) = backend.storage
|
||||
.scan(cursor, None, Some(100))
|
||||
.unwrap();
|
||||
total += items.len();
|
||||
if next_cursor == 0 {
|
||||
break;
|
||||
}
|
||||
cursor = next_cursor;
|
||||
}
|
||||
total
|
||||
});
|
||||
}
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
group.finish();
|
||||
}
|
||||
|
||||
/// Benchmark SCAN operation with pattern matching
|
||||
fn bench_scan_pattern(c: &mut Criterion) {
|
||||
let mut group = c.benchmark_group("scan_ops/scan_pattern");
|
||||
|
||||
for backend_type in BackendType::all() {
|
||||
// Create backend with mixed key patterns
|
||||
let backend = BenchmarkBackend::new(backend_type).expect("Failed to create backend");
|
||||
let mut generator = DataGenerator::new(42);
|
||||
|
||||
// Insert keys with different patterns
|
||||
for i in 0..3_000 {
|
||||
let key = if i < 1_000 {
|
||||
format!("user:{}:profile", i)
|
||||
} else if i < 2_000 {
|
||||
format!("session:{}:data", i - 1_000)
|
||||
} else {
|
||||
format!("cache:{}:value", i - 2_000)
|
||||
};
|
||||
let value = generator.generate_value(100);
|
||||
backend.storage.set(key, value).unwrap();
|
||||
}
|
||||
|
||||
// Benchmark pattern matching
|
||||
for pattern in ["user:*", "session:*", "cache:*"] {
|
||||
group.bench_with_input(
|
||||
BenchmarkId::new(format!("{}/pattern", backend.name()), pattern),
|
||||
&(backend.storage.clone(), pattern),
|
||||
|b, (storage, pattern)| {
|
||||
b.iter(|| {
|
||||
let mut cursor = 0u64;
|
||||
let mut total = 0;
|
||||
loop {
|
||||
let (next_cursor, items) = storage
|
||||
.scan(cursor, Some(pattern), Some(100))
|
||||
.unwrap();
|
||||
total += items.len();
|
||||
if next_cursor == 0 {
|
||||
break;
|
||||
}
|
||||
cursor = next_cursor;
|
||||
}
|
||||
total
|
||||
});
|
||||
}
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
group.finish();
|
||||
}
|
||||
|
||||
/// Benchmark HSCAN operation - scan hash fields
|
||||
fn bench_hscan(c: &mut Criterion) {
|
||||
let mut group = c.benchmark_group("scan_ops/hscan");
|
||||
|
||||
for fields_count in [10, 100] {
|
||||
for backend_type in BackendType::all() {
|
||||
let backend = setup_populated_backend_hashes(backend_type, 100, fields_count, 100)
|
||||
.expect("Failed to setup backend");
|
||||
let generator = DataGenerator::new(42);
|
||||
let key = generator.generate_key("bench:hash", 0);
|
||||
|
||||
group.bench_with_input(
|
||||
BenchmarkId::new(format!("{}/fields", backend.name()), fields_count),
|
||||
&(backend, key),
|
||||
|b, (backend, key)| {
|
||||
b.iter(|| {
|
||||
let mut cursor = 0u64;
|
||||
let mut total = 0;
|
||||
loop {
|
||||
let (next_cursor, items) = backend.storage
|
||||
.hscan(key, cursor, None, Some(10))
|
||||
.unwrap();
|
||||
total += items.len();
|
||||
if next_cursor == 0 {
|
||||
break;
|
||||
}
|
||||
cursor = next_cursor;
|
||||
}
|
||||
total
|
||||
});
|
||||
}
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
group.finish();
|
||||
}
|
||||
|
||||
/// Benchmark HSCAN with pattern matching
|
||||
fn bench_hscan_pattern(c: &mut Criterion) {
|
||||
let mut group = c.benchmark_group("scan_ops/hscan_pattern");
|
||||
|
||||
for backend_type in BackendType::all() {
|
||||
let backend = BenchmarkBackend::new(backend_type).expect("Failed to create backend");
|
||||
let mut generator = DataGenerator::new(42);
|
||||
|
||||
// Create a hash with mixed field patterns
|
||||
let key = "bench:hash:0".to_string();
|
||||
let mut fields = Vec::new();
|
||||
for i in 0..100 {
|
||||
let field = if i < 33 {
|
||||
format!("user_{}", i)
|
||||
} else if i < 66 {
|
||||
format!("session_{}", i - 33)
|
||||
} else {
|
||||
format!("cache_{}", i - 66)
|
||||
};
|
||||
let value = generator.generate_value(100);
|
||||
fields.push((field, value));
|
||||
}
|
||||
backend.storage.hset(&key, fields).unwrap();
|
||||
|
||||
// Benchmark pattern matching
|
||||
for pattern in ["user_*", "session_*", "cache_*"] {
|
||||
group.bench_with_input(
|
||||
BenchmarkId::new(format!("{}/pattern", backend.name()), pattern),
|
||||
&(backend.storage.clone(), key.clone(), pattern),
|
||||
|b, (storage, key, pattern)| {
|
||||
b.iter(|| {
|
||||
let mut cursor = 0u64;
|
||||
let mut total = 0;
|
||||
loop {
|
||||
let (next_cursor, items) = storage
|
||||
.hscan(key, cursor, Some(pattern), Some(10))
|
||||
.unwrap();
|
||||
total += items.len();
|
||||
if next_cursor == 0 {
|
||||
break;
|
||||
}
|
||||
cursor = next_cursor;
|
||||
}
|
||||
total
|
||||
});
|
||||
}
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
group.finish();
|
||||
}
|
||||
|
||||
/// Benchmark KEYS operation with various patterns
|
||||
fn bench_keys_operation(c: &mut Criterion) {
|
||||
let mut group = c.benchmark_group("scan_ops/keys");
|
||||
|
||||
for backend_type in BackendType::all() {
|
||||
// Create backend with mixed key patterns
|
||||
let backend = BenchmarkBackend::new(backend_type).expect("Failed to create backend");
|
||||
let mut generator = DataGenerator::new(42);
|
||||
|
||||
// Insert keys with different patterns
|
||||
for i in 0..3_000 {
|
||||
let key = if i < 1_000 {
|
||||
format!("user:{}:profile", i)
|
||||
} else if i < 2_000 {
|
||||
format!("session:{}:data", i - 1_000)
|
||||
} else {
|
||||
format!("cache:{}:value", i - 2_000)
|
||||
};
|
||||
let value = generator.generate_value(100);
|
||||
backend.storage.set(key, value).unwrap();
|
||||
}
|
||||
|
||||
// Benchmark different patterns
|
||||
for pattern in ["*", "user:*", "session:*", "*:profile", "user:*:profile"] {
|
||||
group.bench_with_input(
|
||||
BenchmarkId::new(format!("{}/pattern", backend.name()), pattern),
|
||||
&(backend.storage.clone(), pattern),
|
||||
|b, (storage, pattern)| {
|
||||
b.iter(|| {
|
||||
storage.keys(pattern).unwrap()
|
||||
});
|
||||
}
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
group.finish();
|
||||
}
|
||||
|
||||
/// Benchmark DBSIZE operation
|
||||
fn bench_dbsize(c: &mut Criterion) {
|
||||
let mut group = c.benchmark_group("scan_ops/dbsize");
|
||||
|
||||
for size in [1_000, 10_000] {
|
||||
for backend_type in BackendType::all() {
|
||||
let backend = setup_populated_backend(backend_type, size, 100)
|
||||
.expect("Failed to setup backend");
|
||||
|
||||
group.bench_with_input(
|
||||
BenchmarkId::new(format!("{}/size", backend.name()), size),
|
||||
&backend,
|
||||
|b, backend| {
|
||||
b.iter(|| {
|
||||
backend.storage.dbsize().unwrap()
|
||||
});
|
||||
}
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
group.finish();
|
||||
}
|
||||
|
||||
/// Benchmark LRANGE with different range sizes
|
||||
fn bench_lrange_sizes(c: &mut Criterion) {
|
||||
let mut group = c.benchmark_group("scan_ops/lrange");
|
||||
|
||||
for range_size in [10, 50, 100] {
|
||||
for backend_type in BackendType::all() {
|
||||
let backend = setup_populated_backend_lists(backend_type, 100, 100, 100)
|
||||
.expect("Failed to setup backend");
|
||||
let generator = DataGenerator::new(42);
|
||||
let key = generator.generate_key("bench:list", 0);
|
||||
|
||||
group.bench_with_input(
|
||||
BenchmarkId::new(format!("{}/range", backend.name()), range_size),
|
||||
&(backend, key, range_size),
|
||||
|b, (backend, key, range_size)| {
|
||||
b.iter(|| {
|
||||
backend.storage.lrange(key, 0, (*range_size - 1) as i64).unwrap()
|
||||
});
|
||||
}
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
group.finish();
|
||||
}
|
||||
|
||||
/// Benchmark HKEYS operation
fn bench_hkeys(c: &mut Criterion) {
    let mut group = c.benchmark_group("scan_ops/hkeys");

    for fields_count in [10, 50, 100] {
        for backend_type in BackendType::all() {
            let backend = setup_populated_backend_hashes(backend_type, 100, fields_count, 100)
                .expect("Failed to setup backend");
            let generator = DataGenerator::new(42);
            let key = generator.generate_key("bench:hash", 0);

            group.bench_with_input(
                BenchmarkId::new(format!("{}/fields", backend.name()), fields_count),
                &(backend, key),
                |b, (backend, key)| {
                    b.iter(|| {
                        backend.storage.hkeys(key).unwrap()
                    });
                }
            );
        }
    }

    group.finish();
}

/// Benchmark HVALS operation
fn bench_hvals(c: &mut Criterion) {
    let mut group = c.benchmark_group("scan_ops/hvals");

    for fields_count in [10, 50, 100] {
        for backend_type in BackendType::all() {
            let backend = setup_populated_backend_hashes(backend_type, 100, fields_count, 100)
                .expect("Failed to setup backend");
            let generator = DataGenerator::new(42);
            let key = generator.generate_key("bench:hash", 0);

            group.bench_with_input(
                BenchmarkId::new(format!("{}/fields", backend.name()), fields_count),
                &(backend, key),
                |b, (backend, key)| {
                    b.iter(|| {
                        backend.storage.hvals(key).unwrap()
                    });
                }
            );
        }
    }

    group.finish();
}

criterion_group!(
    benches,
    bench_scan_full,
    bench_scan_pattern,
    bench_hscan,
    bench_hscan_pattern,
    bench_keys_operation,
    bench_dbsize,
    bench_lrange_sizes,
    bench_hkeys,
    bench_hvals,
);

criterion_main!(benches);
444
benches/single_ops.rs
Normal file
@@ -0,0 +1,444 @@
// benches/single_ops.rs
use criterion::{criterion_group, criterion_main, Criterion, BenchmarkId, BatchSize};

mod common;
use common::*;

/// Benchmark string SET operations
fn bench_string_set(c: &mut Criterion) {
    let mut group = c.benchmark_group("single_ops/strings/set");

    for backend_type in BackendType::all() {
        let backend = BenchmarkBackend::new(backend_type).expect("Failed to create backend");
        let mut generator = DataGenerator::new(42);

        group.bench_with_input(
            BenchmarkId::new(backend.name(), "100bytes"),
            &backend,
            |b, backend| {
                b.iter_batched(
                    || {
                        let key = generator.generate_key("bench:key", rand::random::<usize>() % 100_000);
                        let value = generator.generate_value(100);
                        (key, value)
                    },
                    |(key, value)| {
                        backend.storage.set(key, value).unwrap();
                    },
                    BatchSize::SmallInput
                );
            }
        );
    }

    group.finish();
}

/// Benchmark string GET operations
fn bench_string_get(c: &mut Criterion) {
    let mut group = c.benchmark_group("single_ops/strings/get");

    for backend_type in BackendType::all() {
        // Pre-populate with 10K keys
        let backend = setup_populated_backend(backend_type, 10_000, 100)
            .expect("Failed to setup backend");
        let generator = DataGenerator::new(42);

        group.bench_with_input(
            BenchmarkId::new(backend.name(), "100bytes"),
            &backend,
            |b, backend| {
                b.iter_batched(
                    || {
                        let key_id = rand::random::<usize>() % 10_000;
                        generator.generate_key("bench:key", key_id)
                    },
                    |key| {
                        backend.storage.get(&key).unwrap();
                    },
                    BatchSize::SmallInput
                );
            }
        );
    }

    group.finish();
}

/// Benchmark string DEL operations
fn bench_string_del(c: &mut Criterion) {
    let mut group = c.benchmark_group("single_ops/strings/del");

    for backend_type in BackendType::all() {
        group.bench_with_input(
            BenchmarkId::new(backend_type.name(), "100bytes"),
            &backend_type,
            |b, &backend_type| {
                b.iter_batched(
                    || {
                        // Create fresh backend with one key for each iteration
                        let backend = BenchmarkBackend::new(backend_type).unwrap();
                        let mut generator = DataGenerator::new(42);
                        let key = generator.generate_key("bench:key", 0);
                        let value = generator.generate_value(100);
                        backend.storage.set(key.clone(), value).unwrap();
                        (backend, key)
                    },
                    |(backend, key)| {
                        backend.storage.del(key).unwrap();
                    },
                    BatchSize::SmallInput
                );
            }
        );
    }

    group.finish();
}

/// Benchmark string EXISTS operations
fn bench_string_exists(c: &mut Criterion) {
    let mut group = c.benchmark_group("single_ops/strings/exists");

    for backend_type in BackendType::all() {
        let backend = setup_populated_backend(backend_type, 10_000, 100)
            .expect("Failed to setup backend");
        let generator = DataGenerator::new(42);

        group.bench_with_input(
            BenchmarkId::new(backend.name(), "100bytes"),
            &backend,
            |b, backend| {
                b.iter_batched(
                    || {
                        let key_id = rand::random::<usize>() % 10_000;
                        generator.generate_key("bench:key", key_id)
                    },
                    |key| {
                        backend.storage.exists(&key).unwrap();
                    },
                    BatchSize::SmallInput
                );
            }
        );
    }

    group.finish();
}

/// Benchmark hash HSET operations
fn bench_hash_hset(c: &mut Criterion) {
    let mut group = c.benchmark_group("single_ops/hashes/hset");

    for backend_type in BackendType::all() {
        let backend = BenchmarkBackend::new(backend_type).expect("Failed to create backend");
        let mut generator = DataGenerator::new(42);

        group.bench_with_input(
            BenchmarkId::new(backend.name(), "single_field"),
            &backend,
            |b, backend| {
                b.iter_batched(
                    || {
                        let key = generator.generate_key("bench:hash", rand::random::<usize>() % 1_000);
                        let field = format!("field{}", rand::random::<usize>() % 100);
                        let value = generator.generate_value(100);
                        (key, field, value)
                    },
                    |(key, field, value)| {
                        backend.storage.hset(&key, vec![(field, value)]).unwrap();
                    },
                    BatchSize::SmallInput
                );
            }
        );
    }

    group.finish();
}

/// Benchmark hash HGET operations
fn bench_hash_hget(c: &mut Criterion) {
    let mut group = c.benchmark_group("single_ops/hashes/hget");

    for backend_type in BackendType::all() {
        // Pre-populate with hashes
        let backend = setup_populated_backend_hashes(backend_type, 1_000, 10, 100)
            .expect("Failed to setup backend");
        let generator = DataGenerator::new(42);

        group.bench_with_input(
            BenchmarkId::new(backend.name(), "single_field"),
            &backend,
            |b, backend| {
                b.iter_batched(
                    || {
                        let key = generator.generate_key("bench:hash", rand::random::<usize>() % 1_000);
                        let field = format!("field{}", rand::random::<usize>() % 10);
                        (key, field)
                    },
                    |(key, field)| {
                        backend.storage.hget(&key, &field).unwrap();
                    },
                    BatchSize::SmallInput
                );
            }
        );
    }

    group.finish();
}

/// Benchmark hash HGETALL operations
fn bench_hash_hgetall(c: &mut Criterion) {
    let mut group = c.benchmark_group("single_ops/hashes/hgetall");

    for backend_type in BackendType::all() {
        let backend = setup_populated_backend_hashes(backend_type, 1_000, 10, 100)
            .expect("Failed to setup backend");
        let generator = DataGenerator::new(42);

        group.bench_with_input(
            BenchmarkId::new(backend.name(), "10_fields"),
            &backend,
            |b, backend| {
                b.iter_batched(
                    || {
                        generator.generate_key("bench:hash", rand::random::<usize>() % 1_000)
                    },
                    |key| {
                        backend.storage.hgetall(&key).unwrap();
                    },
                    BatchSize::SmallInput
                );
            }
        );
    }

    group.finish();
}

/// Benchmark hash HDEL operations
fn bench_hash_hdel(c: &mut Criterion) {
    let mut group = c.benchmark_group("single_ops/hashes/hdel");

    for backend_type in BackendType::all() {
        group.bench_with_input(
            BenchmarkId::new(backend_type.name(), "single_field"),
            &backend_type,
            |b, &backend_type| {
                b.iter_batched(
                    || {
                        let backend = setup_populated_backend_hashes(backend_type, 1, 10, 100).unwrap();
                        let generator = DataGenerator::new(42);
                        let key = generator.generate_key("bench:hash", 0);
                        let field = format!("field{}", rand::random::<usize>() % 10);
                        (backend, key, field)
                    },
                    |(backend, key, field)| {
                        backend.storage.hdel(&key, vec![field]).unwrap();
                    },
                    BatchSize::SmallInput
                );
            }
        );
    }

    group.finish();
}

/// Benchmark hash HEXISTS operations
fn bench_hash_hexists(c: &mut Criterion) {
    let mut group = c.benchmark_group("single_ops/hashes/hexists");

    for backend_type in BackendType::all() {
        let backend = setup_populated_backend_hashes(backend_type, 1_000, 10, 100)
            .expect("Failed to setup backend");
        let generator = DataGenerator::new(42);

        group.bench_with_input(
            BenchmarkId::new(backend.name(), "single_field"),
            &backend,
            |b, backend| {
                b.iter_batched(
                    || {
                        let key = generator.generate_key("bench:hash", rand::random::<usize>() % 1_000);
                        let field = format!("field{}", rand::random::<usize>() % 10);
                        (key, field)
                    },
                    |(key, field)| {
                        backend.storage.hexists(&key, &field).unwrap();
                    },
                    BatchSize::SmallInput
                );
            }
        );
    }

    group.finish();
}

/// Benchmark list LPUSH operations
fn bench_list_lpush(c: &mut Criterion) {
    let mut group = c.benchmark_group("single_ops/lists/lpush");

    for backend_type in BackendType::all() {
        let backend = BenchmarkBackend::new(backend_type).expect("Failed to create backend");
        let mut generator = DataGenerator::new(42);

        group.bench_with_input(
            BenchmarkId::new(backend.name(), "single_element"),
            &backend,
            |b, backend| {
                b.iter_batched(
                    || {
                        let key = generator.generate_key("bench:list", rand::random::<usize>() % 1_000);
                        let element = generator.generate_value(100);
                        (key, element)
                    },
                    |(key, element)| {
                        backend.storage.lpush(&key, vec![element]).unwrap();
                    },
                    BatchSize::SmallInput
                );
            }
        );
    }

    group.finish();
}

/// Benchmark list RPUSH operations
fn bench_list_rpush(c: &mut Criterion) {
    let mut group = c.benchmark_group("single_ops/lists/rpush");

    for backend_type in BackendType::all() {
        let backend = BenchmarkBackend::new(backend_type).expect("Failed to create backend");
        let mut generator = DataGenerator::new(42);

        group.bench_with_input(
            BenchmarkId::new(backend.name(), "single_element"),
            &backend,
            |b, backend| {
                b.iter_batched(
                    || {
                        let key = generator.generate_key("bench:list", rand::random::<usize>() % 1_000);
                        let element = generator.generate_value(100);
                        (key, element)
                    },
                    |(key, element)| {
                        backend.storage.rpush(&key, vec![element]).unwrap();
                    },
                    BatchSize::SmallInput
                );
            }
        );
    }

    group.finish();
}

/// Benchmark list LPOP operations
fn bench_list_lpop(c: &mut Criterion) {
    let mut group = c.benchmark_group("single_ops/lists/lpop");

    for backend_type in BackendType::all() {
        group.bench_with_input(
            BenchmarkId::new(backend_type.name(), "single_element"),
            &backend_type,
            |b, &backend_type| {
                b.iter_batched(
                    || {
                        let backend = setup_populated_backend_lists(backend_type, 1, 100, 100).unwrap();
                        let generator = DataGenerator::new(42);
                        let key = generator.generate_key("bench:list", 0);
                        (backend, key)
                    },
                    |(backend, key)| {
                        backend.storage.lpop(&key, 1).unwrap();
                    },
                    BatchSize::SmallInput
                );
            }
        );
    }

    group.finish();
}

/// Benchmark list RPOP operations
fn bench_list_rpop(c: &mut Criterion) {
    let mut group = c.benchmark_group("single_ops/lists/rpop");

    for backend_type in BackendType::all() {
        group.bench_with_input(
            BenchmarkId::new(backend_type.name(), "single_element"),
            &backend_type,
            |b, &backend_type| {
                b.iter_batched(
                    || {
                        let backend = setup_populated_backend_lists(backend_type, 1, 100, 100).unwrap();
                        let generator = DataGenerator::new(42);
                        let key = generator.generate_key("bench:list", 0);
                        (backend, key)
                    },
                    |(backend, key)| {
                        backend.storage.rpop(&key, 1).unwrap();
                    },
                    BatchSize::SmallInput
                );
            }
        );
    }

    group.finish();
}

/// Benchmark list LRANGE operations
fn bench_list_lrange(c: &mut Criterion) {
    let mut group = c.benchmark_group("single_ops/lists/lrange");

    for backend_type in BackendType::all() {
        let backend = setup_populated_backend_lists(backend_type, 1_000, 100, 100)
            .expect("Failed to setup backend");
        let generator = DataGenerator::new(42);

        group.bench_with_input(
            BenchmarkId::new(backend.name(), "10_elements"),
            &backend,
            |b, backend| {
                b.iter_batched(
                    || {
                        generator.generate_key("bench:list", rand::random::<usize>() % 1_000)
                    },
                    |key| {
                        backend.storage.lrange(&key, 0, 9).unwrap();
                    },
                    BatchSize::SmallInput
                );
            }
        );
    }

    group.finish();
}

criterion_group!(
    benches,
    bench_string_set,
    bench_string_get,
    bench_string_del,
    bench_string_exists,
    bench_hash_hset,
    bench_hash_hget,
    bench_hash_hgetall,
    bench_hash_hdel,
    bench_hash_hexists,
    bench_list_lpush,
    bench_list_rpush,
    bench_list_lpop,
    bench_list_rpop,
    bench_list_lrange,
);

criterion_main!(benches);
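Aside: every suite above routes per-iteration setup through Criterion's `iter_batched`, so key/value generation never counts against the measured operation. A minimal stdlib-only sketch of that split, for illustration only (the `HashMap` here is a hypothetical stand-in for a storage backend, and `time_batched` is not a Criterion API):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Times `routine` over `iters` iterations, running `setup` outside the
/// measured section each time -- the same split `iter_batched` performs.
fn time_batched<I, O>(
    iters: u32,
    mut setup: impl FnMut() -> I,
    mut routine: impl FnMut(I) -> O,
) -> Duration {
    let mut total = Duration::ZERO;
    for _ in 0..iters {
        let input = setup(); // untimed: key/value generation lives here
        let start = Instant::now();
        let _ = routine(input); // timed: only the operation under test
        total += start.elapsed();
    }
    total
}

fn main() {
    // Hypothetical in-memory stand-in for a storage backend.
    let mut store: HashMap<String, Vec<u8>> = HashMap::new();
    let mut i = 0usize;
    let elapsed = time_batched(
        1_000,
        || {
            i += 1; // stand-in for DataGenerator's deterministic keys
            (format!("bench:key:{}", i), vec![0u8; 100])
        },
        |(key, value)| store.insert(key, value),
    );
    assert_eq!(store.len(), 1_000);
    println!("1000 inserts measured in {:?} (setup excluded)", elapsed);
}
```

Criterion's `BatchSize::SmallInput` additionally batches many pre-built inputs per timing measurement to amortize clock overhead; this sketch times each call individually for clarity.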