vestige/crates/vestige-core/benches/search_bench.rs
Sam Valladares 8178beb961 feat(v2.0.5): Intentional Amnesia — active forgetting via top-down inhibitory control
First AI memory system to model forgetting as a neuroscience-grounded
PROCESS rather than passive decay. Adds the `suppress` MCP tool (#24),
Rac1 cascade worker, migration V10, and dashboard forgetting indicators.

Based on:
- Anderson, Hanslmayr & Quaegebeur (2025), Nat Rev Neurosci — right
  lateral PFC as the domain-general inhibitory controller; SIF
  compounds with each stopping attempt.
- Cervantes-Sandoval et al. (2020), Front Cell Neurosci PMC7477079 —
  Rac1 GTPase as the active synaptic destabilization mechanism.

What's new:
* `suppress` MCP tool — each call increments `suppression_count`, and
  hybrid search subtracts a `0.15 × count` penalty (saturating at 80%)
  from retrieval scores. Distinct from delete (removes) and demote
  (one-shot).
* Rac1 cascade worker — background sweep piggybacks on the 6h
  consolidation loop, walks `memory_connections` edges from
  recently-suppressed seeds, and applies attenuated FSRS decay to
  co-activated neighbors. You don't just forget Jake — you fade
  the café, the roommate, the birthday.
* 24h labile window — reversible via `suppress({id, reverse: true})`
  within 24 hours. Matches Nader reconsolidation semantics.
* Migration V10 — additive-only (`suppression_count`, `suppressed_at`
  + partial indices). All v2.0.x DBs upgrade seamlessly on first launch.
* Dashboard: `ForgettingIndicator.svelte` pulses when suppressions
  are active. 3D graph nodes dim to 20% opacity when suppressed.
  New WebSocket events: `MemorySuppressed`, `MemoryUnsuppressed`,
  `Rac1CascadeSwept`. Heartbeat carries `suppressed_count`.
* Search pipeline: SIF penalty inserted into the accessibility stage
  so it stacks on top of passive FSRS decay.
* Tool count bumped 23 → 24. Cognitive modules 29 → 30.
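The compounding penalty described above (0.15 × count, saturating at 80%, stacked on top of the passive-decay score) can be sketched as follows. Function and field names here are illustrative, not the actual pipeline API:

```rust
// Hypothetical sketch of the compounding SIF penalty from the changelog.
// `sif_penalty` and `apply_sif` are illustrative names, not real APIs.

/// Each suppression adds a 0.15 penalty, saturating at 80%.
fn sif_penalty(suppression_count: u32) -> f32 {
    (0.15 * suppression_count as f32).min(0.80)
}

/// Apply the penalty on top of the passive-decay accessibility score,
/// so suppression stacks with FSRS decay rather than replacing it.
fn apply_sif(accessibility: f32, suppression_count: u32) -> f32 {
    accessibility * (1.0 - sif_penalty(suppression_count))
}

fn main() {
    assert_eq!(sif_penalty(0), 0.0);
    assert!((sif_penalty(3) - 0.45).abs() < 1e-4);
    assert_eq!(sif_penalty(10), 0.80); // 1.5 clamped at the 80% cap
    println!("score after 3 suppressions: {}", apply_sif(0.9, 3));
}
```

Multiplying rather than subtracting from the raw score keeps a heavily-decayed memory from going negative, though the actual accessibility stage may combine the terms differently.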

Memories persist — they are INHIBITED, not erased. `memory.get(id)`
returns full content through any number of suppressions. The 24h
labile window is a grace period for regret.
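The 24h labile window amounts to a timestamp check at reversal time; a minimal sketch, assuming a `suppressed_at` timestamp is stored per memory (the field and function names are illustrative):

```rust
use std::time::{Duration, SystemTime};

// Hypothetical check for the 24-hour labile window: a suppression can
// only be reversed while `suppressed_at` is within the window.
const LABILE_WINDOW: Duration = Duration::from_secs(24 * 60 * 60);

fn is_reversible(suppressed_at: SystemTime, now: SystemTime) -> bool {
    match now.duration_since(suppressed_at) {
        Ok(elapsed) => elapsed <= LABILE_WINDOW,
        // Clock skew put `suppressed_at` in the future; treat as fresh.
        Err(_) => true,
    }
}

fn main() {
    let now = SystemTime::now();
    let one_hour_ago = now - Duration::from_secs(3600);
    let two_days_ago = now - Duration::from_secs(48 * 3600);
    assert!(is_reversible(one_hour_ago, now));
    assert!(!is_reversible(two_days_ago, now));
}
```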

Also fixes issue #31 (buggy dashboard graph view), a companion UI
bug discovered during the v2.0.5 audit cycle:

* Root cause: node glow `SpriteMaterial` had no `map`, so
  `THREE.Sprite` rendered as a solid-coloured 1×1 plane. Additive
  blending + `UnrealBloomPass(0.8, 0.4, 0.85)` amplified the square
  edges into hard-edged glowing cubes.
* Fix: shared 128×128 radial-gradient `CanvasTexture` singleton used
  as the sprite map. Retuned bloom to `(0.55, 0.6, 0.2)`. Halved fog
  density (0.008 → 0.0035). Edges bumped from dark navy `0x4a4a7a`
  to brand violet `0x8b5cf6` with higher opacity. Added explicit
  `scene.background` and a 2000-point starfield for depth.
* 21 regression tests added in `ui-fixes.test.ts`, locking in every
  invariant (shared texture singleton, `depthWrite: false`, ×6 scale,
  bloom magic numbers via source regex, starfield presence).

Tests: 1,284 Rust (+47) + 171 Vitest (+21) = 1,455 total, 0 failed
Clippy: clean across all targets, zero warnings
Release binary: 22.6MB, `cargo build --release -p vestige-mcp` green
Versions: workspace aligned at 2.0.5 across all 6 crates/packages

Closes #31
2026-04-14 17:30:30 -05:00


//! Vestige Search Benchmarks
//!
//! Benchmarks for core search operations using Criterion.
//! Run with: cargo bench -p vestige-core

use criterion::{Criterion, black_box, criterion_group, criterion_main};
use vestige_core::embeddings::cosine_similarity;
use vestige_core::search::hyde::{centroid_embedding, classify_intent, expand_query};
use vestige_core::search::{linear_combination, reciprocal_rank_fusion, sanitize_fts5_query};

fn bench_classify_intent(c: &mut Criterion) {
    let queries = [
        "What is FSRS?",
        "how to configure embeddings",
        "why does retention decay",
        "fn main()",
        "vestige memory system",
    ];
    c.bench_function("classify_intent", |b| {
        b.iter(|| {
            for q in &queries {
                black_box(classify_intent(q));
            }
        })
    });
}

fn bench_expand_query(c: &mut Criterion) {
    c.bench_function("expand_query", |b| {
        b.iter(|| {
            black_box(expand_query(
                "What is spaced repetition and how does FSRS work?",
            ));
        })
    });
}

fn bench_centroid_embedding(c: &mut Criterion) {
    // Simulate 4 embeddings of 256 dimensions
    let embeddings: Vec<Vec<f32>> = (0..4)
        .map(|i| (0..256).map(|j| ((i * 256 + j) as f32).sin()).collect())
        .collect();
    c.bench_function("centroid_256d_4vecs", |b| {
        b.iter(|| {
            black_box(centroid_embedding(&embeddings));
        })
    });
}

fn bench_rrf_fusion(c: &mut Criterion) {
    let keyword_results: Vec<(String, f32)> = (0..50)
        .map(|i| (format!("doc-{i}"), 1.0 - i as f32 / 50.0))
        .collect();
    let semantic_results: Vec<(String, f32)> = (0..50)
        .map(|i| (format!("doc-{}", 25 + i), 1.0 - i as f32 / 50.0))
        .collect();
    c.bench_function("rrf_50x50", |b| {
        b.iter(|| {
            black_box(reciprocal_rank_fusion(
                &keyword_results,
                &semantic_results,
                60.0,
            ));
        })
    });
}

fn bench_linear_combination(c: &mut Criterion) {
    let keyword_results: Vec<(String, f32)> = (0..50)
        .map(|i| (format!("doc-{i}"), 1.0 - i as f32 / 50.0))
        .collect();
    let semantic_results: Vec<(String, f32)> = (0..50)
        .map(|i| (format!("doc-{}", 25 + i), 1.0 - i as f32 / 50.0))
        .collect();
    c.bench_function("linear_combo_50x50", |b| {
        b.iter(|| {
            black_box(linear_combination(
                &keyword_results,
                &semantic_results,
                0.3,
                0.7,
            ));
        })
    });
}

fn bench_sanitize_fts5(c: &mut Criterion) {
    c.bench_function("sanitize_fts5_query", |b| {
        b.iter(|| {
            black_box(sanitize_fts5_query(
                "hello world \"exact phrase\" OR special-chars!@#",
            ));
        })
    });
}

fn bench_cosine_similarity(c: &mut Criterion) {
    let a: Vec<f32> = (0..256).map(|i| (i as f32).sin()).collect();
    let b: Vec<f32> = (0..256).map(|i| (i as f32).cos()).collect();
    c.bench_function("cosine_similarity_256d", |b_bench| {
        b_bench.iter(|| {
            black_box(cosine_similarity(&a, &b));
        })
    });
}

criterion_group!(
    benches,
    bench_classify_intent,
    bench_expand_query,
    bench_centroid_embedding,
    bench_rrf_fusion,
    bench_linear_combination,
    bench_sanitize_fts5,
    bench_cosine_similarity,
);
criterion_main!(benches);