feat(v2.0.5): Intentional Amnesia — active forgetting via top-down inhibitory control

First AI memory system to model forgetting as a neuroscience-grounded
PROCESS rather than passive decay. Adds the `suppress` MCP tool (#24),
Rac1 cascade worker, migration V10, and dashboard forgetting indicators.

Based on:
- Anderson, Hanslmayr & Quaegebeur (2025), Nat Rev Neurosci — right
  lateral PFC as the domain-general inhibitory controller;
  suppression-induced forgetting (SIF) compounds with each stopping
  attempt.
- Cervantes-Sandoval et al. (2020), Front Cell Neurosci PMC7477079 —
  Rac1 GTPase as the active synaptic destabilization mechanism.

What's new:
* `suppress` MCP tool — each call increments `suppression_count` and
  subtracts a `0.15 × count` penalty (saturating at 80%) from
  retrieval scores during hybrid search. Distinct from delete
  (which removes the memory) and demote (a one-shot penalty).
* Rac1 cascade worker — a background sweep piggybacks on the 6h
  consolidation loop, walks `memory_connections` edges from
  recently-suppressed seeds, applies attenuated FSRS decay to
  co-activated neighbors. You don't just forget Jake — you fade
  the café, the roommate, the birthday.
* 24h labile window — reversible via `suppress({id, reverse: true})`
  within 24 hours. Matches Nader reconsolidation semantics.
* Migration V10 — additive-only (`suppression_count`, `suppressed_at`
  + partial indices). All v2.0.x DBs upgrade seamlessly on first launch.
* Dashboard: `ForgettingIndicator.svelte` pulses when suppressions
  are active. 3D graph nodes dim to 20% opacity when suppressed.
  New WebSocket events: `MemorySuppressed`, `MemoryUnsuppressed`,
  `Rac1CascadeSwept`. Heartbeat carries `suppressed_count`.
* Search pipeline: SIF penalty inserted into the accessibility stage
  so it stacks on top of passive FSRS decay.
* Tool count bumped 23 → 24. Cognitive modules 29 → 30.
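
The SIF scoring math in the `suppress` bullet above can be sketched in a few lines. This is a minimal sketch; the function names are illustrative assumptions, not the actual vestige-core API:

```rust
/// Suppression-induced-forgetting (SIF) penalty: 0.15 per recorded
/// suppression, saturating at 0.80. Illustrative sketch, not the
/// real vestige-core API.
fn sif_penalty(suppression_count: u32) -> f64 {
    (0.15 * suppression_count as f64).min(0.80)
}

/// Applied in the accessibility stage of hybrid search, so it stacks
/// on top of whatever passive FSRS decay already produced.
fn apply_sif(retrieval_score: f64, suppression_count: u32) -> f64 {
    (retrieval_score - sif_penalty(suppression_count)).max(0.0)
}

fn main() {
    assert!((sif_penalty(2) - 0.30).abs() < 1e-9);
    assert_eq!(sif_penalty(9), 0.80); // saturated: six calls reach the cap
    assert!((apply_sif(0.9, 2) - 0.60).abs() < 1e-9);
}
```

Note the penalty is subtractive rather than multiplicative, so a strongly consolidated memory can still surface until the count climbs.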
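
The Rac1 cascade sweep can likewise be sketched as a bounded breadth-first walk over the connection graph. The edge representation, hop limit, and 0.5-per-hop attenuation constant below are assumptions for illustration, not the worker's actual constants:

```rust
use std::collections::{HashMap, HashSet, VecDeque};

/// Sketch of the Rac1 cascade: walk `memory_connections` edges from a
/// recently-suppressed seed, assigning an attenuated decay multiplier
/// that weakens with each hop. Hypothetical shape, not the real worker.
fn cascade_decay(
    edges: &HashMap<&str, Vec<&str>>,
    seed: &str,
    max_hops: u32,
) -> HashMap<String, f64> {
    let mut decay: HashMap<String, f64> = HashMap::new();
    let mut seen: HashSet<&str> = HashSet::from([seed]);
    let mut queue: VecDeque<(&str, u32)> = VecDeque::from([(seed, 0)]);
    while let Some((node, hop)) = queue.pop_front() {
        if hop >= max_hops {
            continue;
        }
        for &next in edges.get(node).into_iter().flatten() {
            if seen.insert(next) {
                // Neighbors decay at half strength per hop: you don't
                // just fade Jake, you faintly fade the café too.
                decay.insert(next.to_string(), 0.5f64.powi(hop as i32 + 1));
                queue.push_back((next, hop + 1));
            }
        }
    }
    decay
}

fn main() {
    let edges = HashMap::from([("jake", vec!["cafe"]), ("cafe", vec!["birthday"])]);
    let d = cascade_decay(&edges, "jake", 2);
    assert_eq!(d["cafe"], 0.5); // 1 hop: half the seed's decay
    assert_eq!(d["birthday"], 0.25); // 2 hops: a quarter
    assert!(!d.contains_key("jake")); // the seed itself is handled by SIF
}
```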

Memories persist — they are INHIBITED, not erased. `memory.get(id)`
returns full content through any number of suppressions. The 24h
labile window is a grace period for regret.
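
The labile window itself reduces to a timestamp comparison against `suppressed_at`. A hedged sketch of that gate, assuming a hypothetical helper rather than the actual tool-layer code:

```rust
use std::time::{Duration, SystemTime};

/// Sketch of the 24h labile-window check behind
/// `suppress({id, reverse: true})`. Hypothetical helper; the real
/// implementation lives in the MCP tool layer.
fn reversal_allowed(suppressed_at: SystemTime, now: SystemTime) -> bool {
    const LABILE_WINDOW: Duration = Duration::from_secs(24 * 60 * 60);
    match now.duration_since(suppressed_at) {
        Ok(elapsed) => elapsed <= LABILE_WINDOW,
        Err(_) => false, // clock skew: suppressed_at is in the future
    }
}

fn main() {
    let t0 = SystemTime::UNIX_EPOCH;
    assert!(reversal_allowed(t0, t0 + Duration::from_secs(23 * 3600)));
    assert!(!reversal_allowed(t0, t0 + Duration::from_secs(25 * 3600)));
}
```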

Also fixes #31 (buggy dashboard graph view), a companion UI bug
discovered during the v2.0.5 audit cycle:

* Root cause: node glow `SpriteMaterial` had no `map`, so
  `THREE.Sprite` rendered as a solid-coloured 1×1 plane. Additive
  blending + `UnrealBloomPass(0.8, 0.4, 0.85)` amplified the square
  edges into hard-edged glowing cubes.
* Fix: shared 128×128 radial-gradient `CanvasTexture` singleton used
  as the sprite map. Retuned bloom to `(0.55, 0.6, 0.2)`. Halved fog
  density (0.008 → 0.0035). Edges bumped from dark navy `0x4a4a7a`
  to brand violet `0x8b5cf6` with higher opacity. Added explicit
  `scene.background` and a 2000-point starfield for depth.
* 21 regression tests added in `ui-fixes.test.ts`, locking in every
  invariant (shared texture singleton, `depthWrite: false`, ×6 scale,
  bloom magic numbers via source regex, starfield presence).

Tests: 1,284 Rust (+47) + 171 Vitest (+21) = 1,455 total, 0 failed
Clippy: clean across all targets, zero warnings
Release binary: 22.6MB, `cargo build --release -p vestige-mcp` green
Versions: workspace aligned at 2.0.5 across all 6 crates/packages

Closes #31
Author: Sam Valladares
Date:   2026-04-14 17:30:30 -05:00
Parent: 95bde93b49
Commit: 8178beb961
359 changed files with 8277 additions and 3416 deletions


@@ -271,16 +271,11 @@ macro_rules! assert_search_count {
#[macro_export]
macro_rules! assert_search_order {
($results:expr, $expected_first:expr) => {
assert!(
!$results.is_empty(),
"Expected non-empty search results"
);
assert!(!$results.is_empty(), "Expected non-empty search results");
assert_eq!(
$results[0].id,
$expected_first,
$results[0].id, $expected_first,
"Expected first result to be {}, got {}",
$expected_first,
$results[0].id
$expected_first, $results[0].id
);
};
}


@@ -6,9 +6,9 @@
//! - Database snapshots and restoration
//! - Concurrent test isolation
use vestige_core::{KnowledgeNode, Rating, Storage};
use std::path::PathBuf;
use tempfile::TempDir;
use vestige_core::{KnowledgeNode, Rating, Storage};
/// Helper to create IngestInput (works around non_exhaustive)
#[allow(clippy::too_many_arguments)]
@@ -107,10 +107,7 @@ impl TestDatabaseManager {
/// Get the number of nodes in the database
pub fn node_count(&self) -> i64 {
self.storage
.get_stats()
.map(|s| s.total_nodes)
.unwrap_or(0)
self.storage.get_stats().map(|s| s.total_nodes).unwrap_or(0)
}
// ========================================================================
@@ -257,10 +254,7 @@ impl TestDatabaseManager {
/// Take a snapshot of current database state
pub fn take_snapshot(&mut self) {
let nodes = self
.storage
.get_all_nodes(10000, 0)
.unwrap_or_default();
let nodes = self.storage.get_all_nodes(10000, 0).unwrap_or_default();
self.snapshot = Some(nodes);
}
@@ -322,8 +316,8 @@ impl TestDatabaseManager {
let _ = std::fs::remove_file(&self.db_path);
// Recreate storage
self.storage = Storage::new(Some(self.db_path.clone()))
.expect("Failed to recreate storage");
self.storage =
Storage::new(Some(self.db_path.clone())).expect("Failed to recreate storage");
}
}


@@ -183,7 +183,13 @@ impl TestDataFactory {
/// Create a batch of memories
pub fn create_batch(storage: &mut Storage, count: usize) -> Vec<String> {
Self::create_batch_with_config(storage, BatchConfig { count, ..Default::default() })
Self::create_batch_with_config(
storage,
BatchConfig {
count,
..Default::default()
},
)
}
/// Create a batch with custom configuration
@@ -212,9 +218,15 @@ impl TestDataFactory {
let (valid_from, valid_until) = if config.with_temporal {
let now = Utc::now();
if i % 3 == 0 {
(Some(now - Duration::days(30)), Some(now + Duration::days(30)))
(
Some(now - Duration::days(30)),
Some(now + Duration::days(30)),
)
} else if i % 3 == 1 {
(Some(now - Duration::days(60)), Some(now - Duration::days(30)))
(
Some(now - Duration::days(60)),
Some(now - Duration::days(30)),
)
} else {
(None, None)
}
@@ -273,12 +285,7 @@ impl TestDataFactory {
}
// Emotional memory (decay should be affected by sentiment)
let emotional = Self::create_emotional_memory(
storage,
"Important life event",
0.9,
0.95,
);
let emotional = Self::create_emotional_memory(storage, "Important life event", 0.9, 0.95);
if let Some(node) = emotional {
metadata.insert("emotional".to_string(), node.id.clone());
ids.push(node.id);
@@ -445,12 +452,8 @@ impl TestDataFactory {
}
// No bounds (always valid)
if let Some(node) = Self::create_temporal_memory(
storage,
"Always valid memory",
None,
None,
) {
if let Some(node) = Self::create_temporal_memory(storage, "Always valid memory", None, None)
{
metadata.insert("always_valid".to_string(), node.id.clone());
ids.push(node.id);
}
@@ -469,8 +472,15 @@ impl TestDataFactory {
/// Get a random node type
pub fn random_node_type(seed: usize) -> &'static str {
const TYPES: [&str; 9] = [
"fact", "concept", "procedure", "event", "relationship",
"quote", "code", "question", "insight",
"fact",
"concept",
"procedure",
"event",
"relationship",
"quote",
"code",
"question",
"insight",
];
TYPES[seed % TYPES.len()]
}
@@ -478,10 +488,26 @@ impl TestDataFactory {
/// Generate lorem ipsum-like content
pub fn lorem_content(words: usize, seed: usize) -> String {
const WORDS: [&str; 20] = [
"the", "memory", "learning", "knowledge", "algorithm",
"data", "system", "process", "function", "method",
"class", "object", "variable", "constant", "type",
"structure", "pattern", "design", "architecture", "code",
"the",
"memory",
"learning",
"knowledge",
"algorithm",
"data",
"system",
"process",
"function",
"method",
"class",
"object",
"variable",
"constant",
"type",
"structure",
"pattern",
"design",
"architecture",
"code",
];
(0..words)
@@ -493,8 +519,16 @@ impl TestDataFactory {
/// Generate tags
pub fn generate_tags(count: usize, seed: usize) -> Vec<String> {
const TAGS: [&str; 10] = [
"important", "review", "todo", "concept", "fact",
"code", "note", "idea", "question", "reference",
"important",
"review",
"todo",
"concept",
"fact",
"code",
"note",
"idea",
"question",
"reference",
];
(0..count)


@@ -145,7 +145,11 @@ impl MockEmbeddingService {
// Map word to a sparse set of dimensions
for i in 0..16 {
let dim = ((word_hash >> (i * 4)) as usize) % MOCK_EMBEDDING_DIM;
let sign = if (word_hash >> (i + 48)) & 1 == 0 { 1.0 } else { -1.0 };
let sign = if (word_hash >> (i + 48)) & 1 == 0 {
1.0
} else {
-1.0
};
let magnitude = ((word_hash >> (i * 2)) as f32 % 100.0) / 100.0 + 0.5;
embedding[dim] += sign * magnitude;
}
@@ -342,9 +346,15 @@ mod tests {
let query = service.embed("programming code");
let candidates = vec![
("doc1".to_string(), service.embed("python programming language")),
(
"doc1".to_string(),
service.embed("python programming language"),
),
("doc2".to_string(), service.embed("cooking recipes")),
("doc3".to_string(), service.embed("software development code")),
(
"doc3".to_string(),
service.embed("software development code"),
),
];
let result = service.find_most_similar(&query, &candidates);


@@ -17,12 +17,12 @@
use chrono::{DateTime, Duration, Utc};
use std::collections::{HashMap, HashSet};
use vestige_core::neuroscience::spreading_activation::{
ActivatedMemory, ActivationConfig, ActivationNetwork, LinkType,
};
use vestige_core::neuroscience::hippocampal_index::{
BarcodeGenerator, ContentPointer, ContentType, HippocampalIndex, HippocampalIndexConfig,
IndexQuery, MemoryBarcode, MemoryIndex, INDEX_EMBEDDING_DIM,
INDEX_EMBEDDING_DIM, IndexQuery, MemoryBarcode, MemoryIndex,
};
use vestige_core::neuroscience::spreading_activation::{
ActivatedMemory, ActivationConfig, ActivationNetwork, LinkType,
};
use vestige_core::neuroscience::synaptic_tagging::{
CaptureWindow, DecayFunction, ImportanceEvent, ImportanceEventType, SynapticTaggingConfig,
@@ -36,9 +36,9 @@ use vestige_core::neuroscience::synaptic_tagging::{
/// SM-2 state for a card
#[derive(Debug, Clone)]
struct SM2State {
easiness_factor: f64, // EF, starts at 2.5
interval: i32, // Days until next review
repetitions: i32, // Number of successful reviews
easiness_factor: f64, // EF, starts at 2.5
interval: i32, // Days until next review
repetitions: i32, // Number of successful reviews
}
impl Default for SM2State {
@@ -73,8 +73,9 @@ fn sm2_review(state: &SM2State, grade: SM2Grade) -> SM2State {
let q = grade.as_i32();
// Update easiness factor
let mut new_ef = state.easiness_factor + (0.1 - (5 - q) as f64 * (0.08 + (5 - q) as f64 * 0.02));
new_ef = new_ef.max(1.3); // EF never goes below 1.3
let mut new_ef =
state.easiness_factor + (0.1 - (5 - q) as f64 * (0.08 + (5 - q) as f64 * 0.02));
new_ef = new_ef.max(1.3); // EF never goes below 1.3
if q < 3 {
// Failed - restart learning
@@ -117,9 +118,8 @@ fn sm2_retention(interval: i32, elapsed_days: i32) -> f64 {
/// FSRS-6 default weights
const FSRS6_WEIGHTS: [f64; 21] = [
0.212, 1.2931, 2.3065, 8.2956, 6.4133, 0.8334, 3.0194, 0.001,
1.8722, 0.1666, 0.796, 1.4835, 0.0614, 0.2629, 1.6483, 0.6014,
1.8729, 0.5425, 0.0912, 0.0658, 0.1542,
0.212, 1.2931, 2.3065, 8.2956, 6.4133, 0.8334, 3.0194, 0.001, 1.8722, 0.1666, 0.796, 1.4835,
0.0614, 0.2629, 1.6483, 0.6014, 1.8729, 0.5425, 0.0912, 0.0658, 0.1542,
];
/// FSRS-6 state
@@ -160,7 +160,9 @@ fn fsrs6_retrievability(stability: f64, elapsed_days: f64, w20: f64) -> f64 {
return 1.0;
}
let factor = fsrs6_factor(w20);
(1.0 + factor * elapsed_days / stability).powf(-w20).clamp(0.0, 1.0)
(1.0 + factor * elapsed_days / stability)
.powf(-w20)
.clamp(0.0, 1.0)
}
/// FSRS-6 interval calculation
@@ -183,24 +185,32 @@ fn fsrs6_review(state: &FSRS6State, grade: FSRS6Grade, elapsed_days: f64) -> FSR
let new_stability = match grade {
FSRS6Grade::Again => {
// Lapse formula
w[11] * state.difficulty.powf(-w[12])
w[11]
* state.difficulty.powf(-w[12])
* ((state.stability + 1.0).powf(w[13]) - 1.0)
* (w[14] * (1.0 - r)).exp()
}
_ => {
// Recall formula
let hard_penalty = if matches!(grade, FSRS6Grade::Hard) { w[15] } else { 1.0 };
let easy_bonus = if matches!(grade, FSRS6Grade::Easy) { w[16] } else { 1.0 };
let hard_penalty = if matches!(grade, FSRS6Grade::Hard) {
w[15]
} else {
1.0
};
let easy_bonus = if matches!(grade, FSRS6Grade::Easy) {
w[16]
} else {
1.0
};
state.stability * (
w[8].exp()
* (11.0 - state.difficulty)
* state.stability.powf(-w[9])
* ((w[10] * (1.0 - r)).exp() - 1.0)
* hard_penalty
* easy_bonus
+ 1.0
)
state.stability
* (w[8].exp()
* (11.0 - state.difficulty)
* state.stability.powf(-w[9])
* ((w[10] * (1.0 - r)).exp() - 1.0)
* hard_penalty
* easy_bonus
+ 1.0)
}
};
@@ -209,8 +219,8 @@ fn fsrs6_review(state: &FSRS6State, grade: FSRS6Grade, elapsed_days: f64) -> FSR
let delta = -w[6] * (g - 3.0);
let mean_reversion = (10.0 - state.difficulty) / 9.0;
let d0 = w[4] - (w[5] * 2.0).exp() + 1.0;
let new_difficulty = (w[7] * d0 + (1.0 - w[7]) * (state.difficulty + delta * mean_reversion))
.clamp(1.0, 10.0);
let new_difficulty =
(w[7] * d0 + (1.0 - w[7]) * (state.difficulty + delta * mean_reversion)).clamp(1.0, 10.0);
FSRS6State {
difficulty: new_difficulty,
@@ -226,7 +236,7 @@ fn fsrs6_review(state: &FSRS6State, grade: FSRS6Grade, elapsed_days: f64) -> FSR
/// Leitner box state
#[derive(Debug, Clone)]
struct LeitnerState {
box_number: i32, // 1-5
box_number: i32, // 1-5
}
impl Default for LeitnerState {
@@ -288,7 +298,9 @@ impl SimilaritySearch {
}
fn search(&self, query_embedding: &[f32], top_k: usize) -> Vec<(String, f64)> {
let mut results: Vec<(String, f64)> = self.embeddings.iter()
let mut results: Vec<(String, f64)> = self
.embeddings
.iter()
.map(|(id, emb)| {
let sim = cosine_similarity(query_embedding, emb);
(id.clone(), sim)
@@ -331,9 +343,8 @@ fn test_fsrs6_vs_sm2_efficiency() {
// Simulate SM-2
let mut sm2_reviews = 0;
let mut sm2_states: Vec<(SM2State, i32)> = (0..NUM_CARDS)
.map(|_| (SM2State::default(), 0))
.collect();
let mut sm2_states: Vec<(SM2State, i32)> =
(0..NUM_CARDS).map(|_| (SM2State::default(), 0)).collect();
for day in 1..=DAYS {
for (state, next_review) in sm2_states.iter_mut() {
@@ -349,9 +360,8 @@ fn test_fsrs6_vs_sm2_efficiency() {
// Simulate FSRS-6
let mut fsrs_reviews = 0;
let mut fsrs_states: Vec<(FSRS6State, i32)> = (0..NUM_CARDS)
.map(|_| (FSRS6State::default(), 0))
.collect();
let mut fsrs_states: Vec<(FSRS6State, i32)> =
(0..NUM_CARDS).map(|_| (FSRS6State::default(), 0)).collect();
for day in 1..=DAYS {
for (state, next_review) in fsrs_states.iter_mut() {
@@ -432,7 +442,7 @@ fn test_fsrs6_vs_sm2_reviews_same_retention() {
// SM-2: Interval growth is linear with EF
// After n successful reviews: interval ≈ previous * 2.5
let sm2_intervals = vec![1, 6, 15, 38, 95]; // Approximate SM-2 progression
let sm2_intervals = vec![1, 6, 15, 38, 95]; // Approximate SM-2 progression
// FSRS-6: Stability grows based on forgetting curve parameters
// This allows for more nuanced interval optimization
@@ -469,13 +479,13 @@ fn test_fsrs6_vs_sm2_reviews_same_retention() {
// Test the core FSRS-6 innovation: difficulty modulation
// Create a "hard" card and compare stability growth
let mut hard_state = FSRS6State {
difficulty: 8.0, // Hard card
difficulty: 8.0, // Hard card
stability: FSRS6State::default().stability,
reps: 0,
};
let mut easy_state = FSRS6State {
difficulty: 2.0, // Easy card
difficulty: 2.0, // Easy card
stability: FSRS6State::default().stability,
reps: 0,
};
@@ -593,7 +603,7 @@ fn test_fsrs6_vs_leitner() {
/// get shorter intervals, and users with flatter curves (lower w20) get longer.
#[test]
fn test_fsrs6_personalization_improvement() {
let default_w20 = FSRS6_WEIGHTS[20]; // 0.1542
let default_w20 = FSRS6_WEIGHTS[20]; // 0.1542
// User with faster forgetting (higher w20 = steeper curve)
let fast_forgetter_w20 = 0.35;
@@ -629,7 +639,7 @@ fn test_fsrs6_personalization_improvement() {
// The key insight: w20 affects optimal interval calculation
// For same desired_retention (0.9), different w20 gives different intervals
let desired_retention = 0.85; // Target 85% to see interval differences
let desired_retention = 0.85; // Target 85% to see interval differences
let default_interval = fsrs6_interval(stability, desired_retention, default_w20);
let fast_interval = fsrs6_interval(stability, desired_retention, fast_forgetter_w20);
let slow_interval = fsrs6_interval(stability, desired_retention, slow_forgetter_w20);
@@ -638,7 +648,9 @@ fn test_fsrs6_personalization_improvement() {
assert!(
default_interval > 0 && fast_interval > 0 && slow_interval > 0,
"All intervals should be positive: default={}, fast={}, slow={}",
default_interval, fast_interval, slow_interval
default_interval,
fast_interval,
slow_interval
);
// The total range of intervals demonstrates personalization value
@@ -747,34 +759,55 @@ fn test_fsrs6_hard_penalty_effectiveness() {
fn test_spreading_vs_similarity_1_hop() {
// Setup spreading activation network
let mut network = ActivationNetwork::new();
network.add_edge("rust".to_string(), "cargo".to_string(), LinkType::Semantic, 0.9);
network.add_edge("rust".to_string(), "ownership".to_string(), LinkType::Semantic, 0.85);
network.add_edge(
"rust".to_string(),
"cargo".to_string(),
LinkType::Semantic,
0.9,
);
network.add_edge(
"rust".to_string(),
"ownership".to_string(),
LinkType::Semantic,
0.85,
);
// Setup similarity search with similar embeddings
let mut sim_search = SimilaritySearch::new();
sim_search.add("rust", vec![1.0, 0.0, 0.0]);
sim_search.add("cargo", vec![0.9, 0.1, 0.0]); // Similar to rust
sim_search.add("ownership", vec![0.85, 0.15, 0.0]); // Similar to rust
sim_search.add("python", vec![0.0, 1.0, 0.0]); // Unrelated
sim_search.add("cargo", vec![0.9, 0.1, 0.0]); // Similar to rust
sim_search.add("ownership", vec![0.85, 0.15, 0.0]); // Similar to rust
sim_search.add("python", vec![0.0, 1.0, 0.0]); // Unrelated
// Spreading activation
let spreading_results = network.activate("rust", 1.0);
let spreading_found: HashSet<_> = spreading_results.iter()
let spreading_found: HashSet<_> = spreading_results
.iter()
.map(|r| r.memory_id.as_str())
.collect();
// Similarity search
let sim_results = sim_search.search(&[1.0, 0.0, 0.0], 3);
let sim_found: HashSet<_> = sim_results.iter()
let sim_found: HashSet<_> = sim_results
.iter()
.filter(|(_, score)| *score > 0.8)
.map(|(id, _)| id.as_str())
.collect();
// At 1-hop, both should find the direct connections
assert!(spreading_found.contains("cargo"), "Spreading should find cargo");
assert!(spreading_found.contains("ownership"), "Spreading should find ownership");
assert!(
spreading_found.contains("cargo"),
"Spreading should find cargo"
);
assert!(
spreading_found.contains("ownership"),
"Spreading should find ownership"
);
assert!(sim_found.contains("cargo"), "Similarity should find cargo");
assert!(sim_found.contains("ownership"), "Similarity should find ownership");
assert!(
sim_found.contains("ownership"),
"Similarity should find ownership"
);
}
/// Test 2-hop: Spreading activation finds indirect connections.
@@ -790,23 +823,35 @@ fn test_spreading_vs_similarity_2_hop() {
// Create a chain: rust -> tokio -> async_runtime
// rust and async_runtime have NO direct similarity
network.add_edge("rust".to_string(), "tokio".to_string(), LinkType::Semantic, 0.9);
network.add_edge("tokio".to_string(), "async_runtime".to_string(), LinkType::Semantic, 0.85);
network.add_edge(
"rust".to_string(),
"tokio".to_string(),
LinkType::Semantic,
0.9,
);
network.add_edge(
"tokio".to_string(),
"async_runtime".to_string(),
LinkType::Semantic,
0.85,
);
// Similarity search - embeddings show NO similarity between rust and async_runtime
let mut sim_search = SimilaritySearch::new();
sim_search.add("rust", vec![1.0, 0.0, 0.0, 0.0]);
sim_search.add("tokio", vec![0.7, 0.7, 0.0, 0.0]); // Bridge
sim_search.add("async_runtime", vec![0.0, 1.0, 0.0, 0.0]); // No similarity to rust
sim_search.add("tokio", vec![0.7, 0.7, 0.0, 0.0]); // Bridge
sim_search.add("async_runtime", vec![0.0, 1.0, 0.0, 0.0]); // No similarity to rust
// Spreading finds async_runtime through the chain
let spreading_results = network.activate("rust", 1.0);
let spreading_found_async = spreading_results.iter()
let spreading_found_async = spreading_results
.iter()
.any(|r| r.memory_id == "async_runtime");
// Similarity from "rust" does NOT find async_runtime
let sim_results = sim_search.search(&[1.0, 0.0, 0.0, 0.0], 5);
let sim_found_async = sim_results.iter()
let sim_found_async = sim_results
.iter()
.any(|(id, score)| id == "async_runtime" && *score > 0.5);
assert!(
@@ -832,27 +877,52 @@ fn test_spreading_vs_similarity_3_hop() {
// Create 3-hop chain: A -> B -> C -> D
// Each step has semantic connection, but A and D have ZERO direct similarity
network.add_edge("concept_a".to_string(), "concept_b".to_string(), LinkType::Semantic, 0.9);
network.add_edge("concept_b".to_string(), "concept_c".to_string(), LinkType::Semantic, 0.9);
network.add_edge("concept_c".to_string(), "concept_d".to_string(), LinkType::Semantic, 0.9);
network.add_edge(
"concept_a".to_string(),
"concept_b".to_string(),
LinkType::Semantic,
0.9,
);
network.add_edge(
"concept_b".to_string(),
"concept_c".to_string(),
LinkType::Semantic,
0.9,
);
network.add_edge(
"concept_c".to_string(),
"concept_d".to_string(),
LinkType::Semantic,
0.9,
);
// Embeddings: A and D are orthogonal (zero similarity)
let mut sim_search = SimilaritySearch::new();
sim_search.add("concept_a", vec![1.0, 0.0, 0.0, 0.0]);
sim_search.add("concept_b", vec![0.7, 0.7, 0.0, 0.0]);
sim_search.add("concept_c", vec![0.0, 0.7, 0.7, 0.0]);
sim_search.add("concept_d", vec![0.0, 0.0, 0.0, 1.0]); // Orthogonal to A
sim_search.add("concept_d", vec![0.0, 0.0, 0.0, 1.0]); // Orthogonal to A
// Spreading finds D
let spreading_results = network.activate("concept_a", 1.0);
let d_result = spreading_results.iter().find(|r| r.memory_id == "concept_d");
let d_result = spreading_results
.iter()
.find(|r| r.memory_id == "concept_d");
assert!(d_result.is_some(), "Spreading MUST find concept_d at 3 hops");
assert_eq!(d_result.unwrap().distance, 3, "Should be exactly 3 hops away");
assert!(
d_result.is_some(),
"Spreading MUST find concept_d at 3 hops"
);
assert_eq!(
d_result.unwrap().distance,
3,
"Should be exactly 3 hops away"
);
// Similarity CANNOT find D from A
let sim_results = sim_search.search(&[1.0, 0.0, 0.0, 0.0], 10);
let sim_d_score = sim_results.iter()
let sim_d_score = sim_results
.iter()
.find(|(id, _)| id == "concept_d")
.map(|(_, score)| *score)
.unwrap_or(0.0);
@@ -873,9 +943,24 @@ fn test_spreading_finds_chains_similarity_misses() {
// Chain: "memory_leak" -> "reference_counting" -> "Arc_Weak" -> "cyclic_references"
// The solution (cyclic_references) is NOT semantically similar to "memory_leak"
network.add_edge("memory_leak".to_string(), "reference_counting".to_string(), LinkType::Causal, 0.9);
network.add_edge("reference_counting".to_string(), "arc_weak".to_string(), LinkType::Semantic, 0.85);
network.add_edge("arc_weak".to_string(), "cyclic_references".to_string(), LinkType::Semantic, 0.9);
network.add_edge(
"memory_leak".to_string(),
"reference_counting".to_string(),
LinkType::Causal,
0.9,
);
network.add_edge(
"reference_counting".to_string(),
"arc_weak".to_string(),
LinkType::Semantic,
0.85,
);
network.add_edge(
"arc_weak".to_string(),
"cyclic_references".to_string(),
LinkType::Semantic,
0.9,
);
// The problem: "cyclic_references" has zero direct similarity to "memory_leak"
// (they use completely different vocabulary)
@@ -883,16 +968,18 @@ fn test_spreading_finds_chains_similarity_misses() {
sim_search.add("memory_leak", vec![1.0, 0.0, 0.0, 0.0]);
sim_search.add("reference_counting", vec![0.5, 0.5, 0.0, 0.0]);
sim_search.add("arc_weak", vec![0.0, 0.7, 0.3, 0.0]);
sim_search.add("cyclic_references", vec![0.0, 0.0, 0.0, 1.0]); // Totally different!
sim_search.add("cyclic_references", vec![0.0, 0.0, 0.0, 1.0]); // Totally different!
// Spreading activation finds the solution
let spreading_results = network.activate("memory_leak", 1.0);
let found_solution = spreading_results.iter()
let found_solution = spreading_results
.iter()
.any(|r| r.memory_id == "cyclic_references");
// Similarity search cannot find it
let sim_results = sim_search.search(&[1.0, 0.0, 0.0, 0.0], 10);
let sim_found = sim_results.iter()
let sim_found = sim_results
.iter()
.any(|(id, score)| id == "cyclic_references" && *score > 0.3);
assert!(
@@ -911,26 +998,47 @@ fn test_spreading_path_quality() {
let mut network = ActivationNetwork::new();
// Create a knowledge graph about Rust error handling
network.add_edge("error_handling".to_string(), "result_type".to_string(), LinkType::Semantic, 0.9);
network.add_edge("result_type".to_string(), "question_mark_operator".to_string(), LinkType::Semantic, 0.85);
network.add_edge("question_mark_operator".to_string(), "early_return".to_string(), LinkType::Semantic, 0.8);
network.add_edge(
"error_handling".to_string(),
"result_type".to_string(),
LinkType::Semantic,
0.9,
);
network.add_edge(
"result_type".to_string(),
"question_mark_operator".to_string(),
LinkType::Semantic,
0.85,
);
network.add_edge(
"question_mark_operator".to_string(),
"early_return".to_string(),
LinkType::Semantic,
0.8,
);
let results = network.activate("error_handling", 1.0);
// Find the path to early_return
let early_return_result = results.iter()
let early_return_result = results
.iter()
.find(|r| r.memory_id == "early_return")
.expect("Should find early_return");
// Verify the path makes sense
assert_eq!(early_return_result.path.len(), 4, "Path should have 4 nodes");
assert_eq!(
early_return_result.path.len(),
4,
"Path should have 4 nodes"
);
assert_eq!(early_return_result.path[0], "error_handling");
assert_eq!(early_return_result.path[1], "result_type");
assert_eq!(early_return_result.path[2], "question_mark_operator");
assert_eq!(early_return_result.path[3], "early_return");
// Activation should decay along the path
let result_type_activation = results.iter()
let result_type_activation = results
.iter()
.find(|r| r.memory_id == "result_type")
.map(|r| r.activation)
.unwrap_or(0.0);
@@ -1056,20 +1164,52 @@ fn test_spreading_mixed_link_types() {
let mut network = ActivationNetwork::new();
// Create edges with different link types
network.add_edge("event".to_string(), "semantic_relation".to_string(), LinkType::Semantic, 0.9);
network.add_edge("event".to_string(), "temporal_relation".to_string(), LinkType::Temporal, 0.9);
network.add_edge("event".to_string(), "causal_relation".to_string(), LinkType::Causal, 0.9);
network.add_edge("event".to_string(), "spatial_relation".to_string(), LinkType::Spatial, 0.9);
network.add_edge(
"event".to_string(),
"semantic_relation".to_string(),
LinkType::Semantic,
0.9,
);
network.add_edge(
"event".to_string(),
"temporal_relation".to_string(),
LinkType::Temporal,
0.9,
);
network.add_edge(
"event".to_string(),
"causal_relation".to_string(),
LinkType::Causal,
0.9,
);
network.add_edge(
"event".to_string(),
"spatial_relation".to_string(),
LinkType::Spatial,
0.9,
);
let results = network.activate("event", 1.0);
// Should find all related nodes
let found_ids: HashSet<_> = results.iter().map(|r| r.memory_id.as_str()).collect();
assert!(found_ids.contains("semantic_relation"), "Should find semantic relation");
assert!(found_ids.contains("temporal_relation"), "Should find temporal relation");
assert!(found_ids.contains("causal_relation"), "Should find causal relation");
assert!(found_ids.contains("spatial_relation"), "Should find spatial relation");
assert!(
found_ids.contains("semantic_relation"),
"Should find semantic relation"
);
assert!(
found_ids.contains("temporal_relation"),
"Should find temporal relation"
);
assert!(
found_ids.contains("causal_relation"),
"Should find causal relation"
);
assert!(
found_ids.contains("spatial_relation"),
"Should find spatial relation"
);
// Verify link types are preserved
for result in &results {
@@ -1094,7 +1234,7 @@ fn test_spreading_mixed_link_types() {
#[test]
fn test_retroactive_vs_timestamp_importance() {
let config = SynapticTaggingConfig {
capture_window: CaptureWindow::new(9.0, 2.0), // 9 hours back, 2 hours forward
capture_window: CaptureWindow::new(9.0, 2.0), // 9 hours back, 2 hours forward
prp_threshold: 0.7,
tag_lifetime_hours: 12.0,
min_tag_strength: 0.3,
@@ -1130,7 +1270,8 @@ fn test_retroactive_vs_timestamp_importance() {
// (In tests, tag_memory() uses Utc::now(), so temporal_distance ~= 0)
for captured in &result.captured_memories {
assert!(
captured.temporal_distance_hours >= 0.0 || captured.temporal_distance_hours.abs() < 0.01,
captured.temporal_distance_hours >= 0.0
|| captured.temporal_distance_hours.abs() < 0.01,
"Captured memory {} should be encoded at or before event (distance: {:.4}h)",
captured.memory_id,
captured.temporal_distance_hours
@@ -1166,7 +1307,10 @@ fn test_retroactive_captures_related_memories() {
);
// Verify cluster properties
assert!(cluster.average_importance > 0.0, "Cluster should have positive importance");
assert!(
cluster.average_importance > 0.0,
"Cluster should have positive importance"
);
assert_eq!(
cluster.trigger_event_type,
ImportanceEventType::EmotionalContent
@@ -1184,7 +1328,11 @@ fn test_retroactive_window_effectiveness() {
(Duration::hours(1), true, "1 hour before"),
(Duration::hours(4), true, "4 hours before"),
(Duration::hours(8), true, "8 hours before"),
(Duration::hours(10), false, "10 hours before (outside window)"),
(
Duration::hours(10),
false,
"10 hours before (outside window)",
),
(Duration::minutes(-30), true, "30 minutes after"),
(Duration::hours(-3), false, "3 hours after (outside window)"),
];
@@ -1201,7 +1349,11 @@ fn test_retroactive_window_effectiveness() {
if should_capture {
let prob = window.capture_probability(memory_time, event_time);
assert!(prob.is_some(), "{} should have capture probability", description);
assert!(
prob.is_some(),
"{} should have capture probability",
description
);
assert!(
prob.unwrap() > 0.0,
"{} should have positive capture probability",
@@ -1218,7 +1370,7 @@ fn test_retroactive_semantic_filtering() {
capture_window: CaptureWindow::new(9.0, 2.0),
prp_threshold: 0.7,
tag_lifetime_hours: 12.0,
min_tag_strength: 0.1, // Low threshold to test strength effects
min_tag_strength: 0.1, // Low threshold to test strength effects
max_cluster_size: 100,
enable_clustering: true,
auto_decay: true,
@@ -1232,14 +1384,16 @@ fn test_retroactive_semantic_filtering() {
stc.tag_memory_with_strength("highly_relevant", 0.95);
stc.tag_memory_with_strength("moderately_relevant", 0.6);
stc.tag_memory_with_strength("barely_relevant", 0.35);
stc.tag_memory_with_strength("irrelevant", 0.05); // Below threshold
stc.tag_memory_with_strength("irrelevant", 0.05); // Below threshold
// Trigger importance event
let event = ImportanceEvent::user_flag("trigger", None);
let result = stc.trigger_prp(event);
// Higher strength memories should be captured with higher consolidated importance
let captured_ids: HashSet<_> = result.captured_memories.iter()
let captured_ids: HashSet<_> = result
.captured_memories
.iter()
.map(|c| c.memory_id.as_str())
.collect();
@@ -1253,12 +1407,16 @@ fn test_retroactive_semantic_filtering() {
);
// Find consolidated importance values
let highly_relevant_importance = result.captured_memories.iter()
let highly_relevant_importance = result
.captured_memories
.iter()
.find(|c| c.memory_id == "highly_relevant")
.map(|c| c.consolidated_importance)
.unwrap_or(0.0);
let moderately_relevant_importance = result.captured_memories.iter()
let moderately_relevant_importance = result
.captured_memories
.iter()
.find(|c| c.memory_id == "moderately_relevant")
.map(|c| c.consolidated_importance)
.unwrap_or(0.0);
@@ -1285,10 +1443,8 @@ fn test_proof_unique_to_vestige() {
let mut stc = SynapticTaggingSystem::new();
// Memory 1: Ordinary conversation about vacation (time T)
let _vacation_memory = stc.tag_memory_with_context(
"vacation_mention",
"User mentioned Bob's vacation plans"
);
let _vacation_memory =
stc.tag_memory_with_context("vacation_mention", "User mentioned Bob's vacation plans");
// Memory 2: Some other ordinary memories
stc.tag_memory("unrelated_memory_1");
@@ -1300,14 +1456,16 @@ fn test_proof_unique_to_vestige() {
event_type: ImportanceEventType::UserFlag,
memory_id: Some("departure_announcement".to_string()),
timestamp: Utc::now(),
strength: 1.0, // Maximum importance
context: Some("Bob is leaving - this makes prior context important".to_string()),
};
let result = stc.trigger_prp(event);
// The vacation memory should be captured!
let vacation_captured = result.captured_memories.iter()
let vacation_captured = result
.captured_memories
.iter()
.any(|c| c.memory_id == "vacation_mention");
assert!(
@@ -1316,7 +1474,9 @@ fn test_proof_unique_to_vestige() {
);
// Verify the capture details
let vacation_capture = result.captured_memories.iter()
let vacation_capture = result
.captured_memories
.iter()
.find(|c| c.memory_id == "vacation_mention")
.unwrap();
@@ -1324,7 +1484,8 @@ fn test_proof_unique_to_vestige() {
// so temporal_distance is ~0 (but conceptually it's a "backward" capture
// since the memory existed BEFORE it became important)
assert!(
vacation_capture.temporal_distance_hours >= 0.0 || vacation_capture.temporal_distance_hours.abs() < 0.01,
vacation_capture.temporal_distance_hours >= 0.0
|| vacation_capture.temporal_distance_hours.abs() < 0.01,
"Memory should be encoded at or before the importance event (distance: {:.4}h)",
vacation_capture.temporal_distance_hours
);
@@ -1406,7 +1567,7 @@ fn test_index_compression_ratio() {
let full_embedding_dim = 384;
// Index embedding size
let index_embedding_dim = config.summary_dimensions; // 128 by default
// Compression ratio
let compression_ratio = full_embedding_dim as f64 / index_embedding_dim as f64;
@@ -1425,7 +1586,7 @@ fn test_index_compression_ratio() {
);
// Memory savings per memory
let full_size_bytes = full_embedding_dim * 4; // f32 = 4 bytes
let index_size_bytes = index_embedding_dim * 4;
let savings_per_memory = full_size_bytes - index_size_bytes;
@@ -1522,8 +1683,7 @@ fn test_content_pointer_accuracy() {
assert_eq!(chunked_ptr.size_bytes, Some(100));
// Test with hash
let hashed_ptr = ContentPointer::sqlite("data", 1, ContentType::Text)
.with_hash(0xDEADBEEF);
let hashed_ptr = ContentPointer::sqlite("data", 1, ContentType::Text).with_hash(0xDEADBEEF);
assert_eq!(hashed_ptr.content_hash, Some(0xDEADBEEF));
@@ -1531,19 +1691,24 @@ fn test_content_pointer_accuracy() {
let index = HippocampalIndex::new();
let now = Utc::now();
let barcode = index.index_memory(
"test_memory",
"Test content for pointer verification",
"fact",
now,
None,
).unwrap();
let barcode = index
.index_memory(
"test_memory",
"Test content for pointer verification",
"fact",
now,
None,
)
.unwrap();
// Retrieve and verify
let retrieved = index.get_index("test_memory").unwrap().unwrap();
assert_eq!(retrieved.barcode, barcode);
assert!(!retrieved.content_pointers.is_empty(), "Should have content pointer");
assert!(
!retrieved.content_pointers.is_empty(),
"Should have content pointer"
);
// Verify the default pointer is SQLite
let default_ptr = &retrieved.content_pointers[0];

@@ -15,11 +15,11 @@
//! 3. **Scheduler & Timing**: Tests for activity detection and idle triggers
use chrono::{Duration, Utc};
use std::collections::HashSet;
use vestige_core::advanced::dreams::{
ActivityTracker, ConnectionGraph, ConnectionReason, ConsolidationScheduler, DreamConfig,
DreamMemory, InsightType, MemoryDreamer,
};
use std::collections::HashSet;
// ============================================================================
// HELPER FUNCTIONS
@@ -66,7 +66,6 @@ fn make_memory_with_access(
}
}
// ============================================================================
// INSIGHT GENERATION TESTS (5 tests)
// ============================================================================
@@ -448,11 +447,7 @@ async fn test_consolidation_decay_stage() {
let replay = report.stage1_replay.as_ref().unwrap();
// Should replay memories in chronological order
assert_eq!(
replay.sequence.len(),
3,
"Should replay all 3 memories"
);
assert_eq!(replay.sequence.len(), 3, "Should replay all 3 memories");
// Older memory should come first in replay sequence
assert_eq!(
@@ -616,12 +611,7 @@ async fn test_consolidation_transfer_stage() {
vec!["important"],
5,
),
make_memory_with_access(
"low_access",
"Rarely accessed memory",
vec!["minor"],
1,
),
make_memory_with_access("low_access", "Rarely accessed memory", vec!["minor"], 1),
];
let report = scheduler.run_consolidation_cycle(&memories).await;
@@ -911,7 +901,10 @@ fn test_connection_graph_comprehensive() {
// Test strengthening
assert!(graph.strengthen_connection("a", "b", 0.1));
let new_strength = graph.total_connection_strength("a");
assert!(new_strength > a_strength, "Strength should increase after reinforcement");
assert!(
new_strength > a_strength,
"Strength should increase after reinforcement"
);
// Test decay and pruning
graph.apply_decay(0.5);

@@ -22,19 +22,39 @@
use chrono::{Duration, Utc};
use vestige_core::{
// Advanced reconsolidation
AccessContext, AccessTrigger, Modification, ReconsolidationManager, RelationshipType,
// FSRS
Rating, retrievability, retrievability_with_decay, initial_difficulty,
next_interval, FSRSScheduler,
// Neuroscience - Synaptic Tagging
SynapticTaggingSystem, ImportanceEvent, ImportanceEventType,
CaptureWindow, DecayFunction,
// Neuroscience - Memory States
MemoryState, MemoryLifecycle, AccessibilityCalculator,
CompetitionManager, CompetitionCandidate,
AccessContext,
AccessTrigger,
AccessibilityCalculator,
ArousalSignal,
AttentionSession,
AttentionSignal,
CaptureWindow,
CompetitionCandidate,
CompetitionManager,
DecayFunction,
FSRSScheduler,
ImportanceContext,
ImportanceEvent,
ImportanceEventType,
// Neuroscience - Importance Signals
ImportanceSignals, NoveltySignal, ArousalSignal, RewardSignal, AttentionSignal,
ImportanceContext, AttentionSession, OutcomeType,
ImportanceSignals,
MemoryLifecycle,
// Neuroscience - Memory States
MemoryState,
Modification,
NoveltySignal,
OutcomeType,
// FSRS
Rating,
ReconsolidationManager,
RelationshipType,
RewardSignal,
// Neuroscience - Synaptic Tagging
SynapticTaggingSystem,
initial_difficulty,
next_interval,
retrievability,
retrievability_with_decay,
};
// ============================================================================
@@ -91,7 +111,12 @@ fn test_stc_prp_trigger_captures_memories() {
// The tagged memory should be captured
assert!(result.has_captures());
assert!(result.captured_memories.iter().any(|c| c.memory_id == "mem-background"));
assert!(
result
.captured_memories
.iter()
.any(|c| c.memory_id == "mem-background")
);
assert!(stc.is_captured("mem-background"));
}
@@ -140,13 +165,20 @@ fn test_stc_capture_window_probability() {
// Memory just before event - high probability (exponential decay with λ=4.605/9)
let recent_before = event_time - Duration::hours(1);
let prob_recent = window.capture_probability(recent_before, event_time).unwrap();
let prob_recent = window
.capture_probability(recent_before, event_time)
.unwrap();
// At 1h out of 9h with exponential decay: e^(-4.605/9 * 1) ≈ 0.6
assert!(prob_recent > 0.5, "Recent memory should have high capture probability");
assert!(
prob_recent > 0.5,
"Recent memory should have high capture probability"
);
// Memory 6 hours before event - moderate probability
let medium_before = event_time - Duration::hours(6);
let prob_medium = window.capture_probability(medium_before, event_time).unwrap();
let prob_medium = window
.capture_probability(medium_before, event_time)
.unwrap();
assert!(prob_medium > 0.0 && prob_medium < prob_recent);
// Memory outside window - no capture
@@ -165,14 +197,26 @@ fn test_stc_decay_functions() {
let exp_at_half = exp_decay.apply(1.0, 6.0, 12.0);
let exp_at_end = exp_decay.apply(1.0, 12.0, 12.0);
assert!((exp_at_zero - 1.0).abs() < 0.01, "Should be full strength at t=0");
assert!(exp_at_half > 0.0 && exp_at_half < 0.5, "Significant decay at halfway");
assert!(
(exp_at_zero - 1.0).abs() < 0.01,
"Should be full strength at t=0"
);
assert!(
exp_at_half > 0.0 && exp_at_half < 0.5,
"Significant decay at halfway"
);
assert!(exp_at_end < 0.02, "Near zero at lifetime end");
// Linear decay
let linear_decay = DecayFunction::Linear;
assert!((linear_decay.apply(1.0, 5.0, 10.0) - 0.5).abs() < 0.01, "Linear: 50% at halfway");
assert!((linear_decay.apply(1.0, 10.0, 10.0) - 0.0).abs() < 0.01, "Linear: 0% at end");
assert!(
(linear_decay.apply(1.0, 5.0, 10.0) - 0.5).abs() < 0.01,
"Linear: 50% at halfway"
);
assert!(
(linear_decay.apply(1.0, 10.0, 10.0) - 0.0).abs() < 0.01,
"Linear: 0% at end"
);
// Power decay (matches FSRS-6)
let power_decay = DecayFunction::Power;
@@ -259,7 +303,10 @@ fn test_reconsolidation_marks_memory_labile() {
let snapshot = vestige_core::MemorySnapshot::capture(
"Test content".to_string(),
vec!["test".to_string()],
0.8, 5.0, 0.9, vec![],
0.8,
5.0,
0.9,
vec![],
);
manager.mark_labile("mem-123", snapshot);
@@ -277,22 +324,30 @@ fn test_reconsolidation_apply_modifications() {
let snapshot = vestige_core::MemorySnapshot::capture(
"Original content".to_string(),
vec!["original".to_string()],
0.8, 5.0, 0.9, vec![],
0.8,
5.0,
0.9,
vec![],
);
manager.mark_labile("mem-123", snapshot);
// Apply various modifications
let success1 = manager.apply_modification("mem-123", Modification::AddTag {
tag: "new-tag".to_string(),
});
let success2 = manager.apply_modification("mem-123", Modification::BoostRetrieval {
boost: 0.1,
});
let success3 = manager.apply_modification("mem-123", Modification::LinkMemory {
related_memory_id: "mem-456".to_string(),
relationship: RelationshipType::Supports,
});
let success1 = manager.apply_modification(
"mem-123",
Modification::AddTag {
tag: "new-tag".to_string(),
},
);
let success2 =
manager.apply_modification("mem-123", Modification::BoostRetrieval { boost: 0.1 });
let success3 = manager.apply_modification(
"mem-123",
Modification::LinkMemory {
related_memory_id: "mem-456".to_string(),
relationship: RelationshipType::Supports,
},
);
assert!(success1 && success2 && success3);
assert_eq!(manager.get_stats().total_modifications, 3);
@@ -307,16 +362,25 @@ fn test_reconsolidation_finalizes_changes() {
let snapshot = vestige_core::MemorySnapshot::capture(
"Content".to_string(),
vec!["tag".to_string()],
0.8, 5.0, 0.9, vec![],
0.8,
5.0,
0.9,
vec![],
);
manager.mark_labile("mem-123", snapshot);
manager.apply_modification("mem-123", Modification::AddTag {
tag: "new-tag".to_string(),
});
manager.apply_modification("mem-123", Modification::AddContext {
context: "Important meeting notes".to_string(),
});
manager.apply_modification(
"mem-123",
Modification::AddTag {
tag: "new-tag".to_string(),
},
);
manager.apply_modification(
"mem-123",
Modification::AddContext {
context: "Important meeting notes".to_string(),
},
);
let result = manager.reconsolidate("mem-123");
@@ -333,10 +397,8 @@ fn test_reconsolidation_finalizes_changes() {
#[test]
fn test_reconsolidation_tracks_access_context() {
let mut manager = ReconsolidationManager::new();
let snapshot = vestige_core::MemorySnapshot::capture(
"Content".to_string(),
vec![], 0.8, 5.0, 0.9, vec![],
);
let snapshot =
vestige_core::MemorySnapshot::capture("Content".to_string(), vec![], 0.8, 5.0, 0.9, vec![]);
let context = AccessContext {
trigger: AccessTrigger::Search,
query: Some("test query".to_string()),
@@ -357,10 +419,8 @@ fn test_reconsolidation_tracks_access_context() {
#[test]
fn test_reconsolidation_retrieval_history() {
let mut manager = ReconsolidationManager::new();
let snapshot = vestige_core::MemorySnapshot::capture(
"Content".to_string(),
vec![], 0.8, 5.0, 0.9, vec![],
);
let snapshot =
vestige_core::MemorySnapshot::capture("Content".to_string(), vec![], 0.8, 5.0, 0.9, vec![]);
// Multiple retrievals
for _ in 0..3 {
@@ -417,8 +477,10 @@ fn test_fsrs_custom_decay_parameter() {
let r_high_decay = retrievability_with_decay(stability, elapsed, 0.5);
// Lower decay = steeper curve = lower retrievability for same time
assert!(r_low_decay < r_high_decay,
"Lower decay parameter should result in faster forgetting");
assert!(
r_low_decay < r_high_decay,
"Lower decay parameter should result in faster forgetting"
);
}
/// Test interval calculation round-trips with retrievability.
@@ -436,7 +498,9 @@ fn test_fsrs_interval_retrievability_roundtrip() {
assert!(
(actual_r - target_r).abs() < 0.05,
"Round-trip: interval={}, actual_R={:.3}, target_R={:.3}",
interval, actual_r, target_r
interval,
actual_r,
target_r
);
}
@@ -492,7 +556,10 @@ fn test_fsrs_difficulty_mean_reversion() {
let high_d_after = result.state.difficulty;
// Mean reversion should pull high difficulty down
assert!(high_d_after < high_d_before, "High difficulty should decrease");
assert!(
high_d_after < high_d_before,
"High difficulty should decrease"
);
// Create card with low difficulty
let mut low_d_card = scheduler.new_card();
@@ -502,7 +569,10 @@ fn test_fsrs_difficulty_mean_reversion() {
// Again rating should increase difficulty
let result = scheduler.review(&low_d_card, Rating::Again, 0.0, None);
let low_d_after = result.state.difficulty;
assert!(low_d_after > low_d_before, "Again should increase low difficulty");
assert!(
low_d_after > low_d_before,
"Again should increase low difficulty"
);
}
/// Test scheduler lapse tracking.
@@ -541,12 +611,18 @@ fn test_memory_state_accessibility_multipliers() {
assert!((MemoryState::Unavailable.accessibility_multiplier() - 0.05).abs() < 0.001);
// Active > Dormant > Silent > Unavailable
assert!(MemoryState::Active.accessibility_multiplier() >
MemoryState::Dormant.accessibility_multiplier());
assert!(MemoryState::Dormant.accessibility_multiplier() >
MemoryState::Silent.accessibility_multiplier());
assert!(MemoryState::Silent.accessibility_multiplier() >
MemoryState::Unavailable.accessibility_multiplier());
assert!(
MemoryState::Active.accessibility_multiplier()
> MemoryState::Dormant.accessibility_multiplier()
);
assert!(
MemoryState::Dormant.accessibility_multiplier()
> MemoryState::Silent.accessibility_multiplier()
);
assert!(
MemoryState::Silent.accessibility_multiplier()
> MemoryState::Unavailable.accessibility_multiplier()
);
}
/// Test state retrievability properties.
@@ -589,11 +665,7 @@ fn test_memory_lifecycle_transitions() {
fn test_memory_state_competition_suppression() {
let mut lifecycle = MemoryLifecycle::new();
lifecycle.suppress_from_competition(
"winner-123".to_string(),
0.85,
Duration::hours(2),
);
lifecycle.suppress_from_competition("winner-123".to_string(), 0.85, Duration::hours(2));
assert_eq!(lifecycle.state, MemoryState::Unavailable);
assert!(!lifecycle.is_suppression_expired());
@@ -671,7 +743,10 @@ fn test_memory_state_accessibility_calculator() {
assert!(active_threshold < 0.5, "Active has lower threshold");
assert!(silent_threshold > 0.5, "Silent has higher threshold");
assert!(unavailable_threshold > 1.0, "Unavailable is effectively unreachable");
assert!(
unavailable_threshold > 1.0,
"Unavailable is effectively unreachable"
);
}
// ============================================================================
@@ -713,14 +788,19 @@ fn test_importance_arousal_signal() {
let neutral_score = arousal.compute("The meeting is scheduled for tomorrow at 3pm.");
// Highly emotional content
let emotional_score = arousal.compute(
"CRITICAL ERROR!!! Production database is DOWN! Data loss imminent!!!"
);
let emotional_score =
arousal.compute("CRITICAL ERROR!!! Production database is DOWN! Data loss imminent!!!");
assert!(emotional_score > neutral_score,
assert!(
emotional_score > neutral_score,
"Emotional content should have higher arousal: {} vs {}",
emotional_score, neutral_score);
assert!(emotional_score > 0.5, "Highly emotional content should score high");
emotional_score,
neutral_score
);
assert!(
emotional_score > 0.5,
"Highly emotional content should score high"
);
// Detect emotional markers
let markers = arousal.detect_emotional_markers("URGENT: Critical failure!!!");
@@ -740,14 +820,20 @@ fn test_importance_reward_signal() {
reward.record_outcome("mem-helpful", OutcomeType::Helpful);
let helpful_score = reward.compute("mem-helpful");
assert!(helpful_score > 0.5, "Memory with positive outcomes should score high");
assert!(
helpful_score > 0.5,
"Memory with positive outcomes should score high"
);
// Record negative outcomes
reward.record_outcome("mem-unhelpful", OutcomeType::NotHelpful);
reward.record_outcome("mem-unhelpful", OutcomeType::NotHelpful);
let unhelpful_score = reward.compute("mem-unhelpful");
assert!(unhelpful_score < 0.5, "Memory with negative outcomes should score low");
assert!(
unhelpful_score < 0.5,
"Memory with negative outcomes should score low"
);
assert!(helpful_score > unhelpful_score);
}
@@ -768,11 +854,17 @@ fn test_importance_attention_signal() {
edit_count: 2,
unique_memories_accessed: 15,
viewed_docs: true,
query_topics: vec!["rust".to_string(), "async".to_string(), "memory".to_string()],
query_topics: vec![
"rust".to_string(),
"async".to_string(),
"memory".to_string(),
],
};
assert!(attention.detect_learning_mode(&learning_session),
"Should detect learning mode from session patterns");
assert!(
attention.detect_learning_mode(&learning_session),
"Should detect learning mode from session patterns"
);
// Non-learning session (quick edit)
let quick_session = AttentionSession {
@@ -786,8 +878,10 @@ fn test_importance_attention_signal() {
query_topics: vec![],
};
assert!(!attention.detect_learning_mode(&quick_session),
"Quick edit session should not be learning mode");
assert!(
!attention.detect_learning_mode(&quick_session),
"Quick edit session should not be learning mode"
);
}
/// Test composite importance combines all signals.
@@ -806,9 +900,15 @@ fn test_importance_composite_score() {
&context,
);
assert!(score.composite > 0.4, "Important content should score moderately high");
assert!(
score.composite > 0.4,
"Important content should score moderately high"
);
assert!(score.arousal > 0.4, "Emotional content should have arousal");
assert!(score.encoding_boost >= 1.0, "High importance should boost encoding");
assert!(
score.encoding_boost >= 1.0,
"High importance should boost encoding"
);
// Verify all components are present
assert!(score.novelty >= 0.0 && score.novelty <= 1.0);

File diff suppressed because it is too large.

@@ -5,10 +5,10 @@
//!
//! Based on Collins & Loftus (1975) spreading activation theory.
use std::collections::HashSet;
use vestige_core::neuroscience::spreading_activation::{
ActivationConfig, ActivationNetwork, LinkType,
};
use std::collections::HashSet;
// ============================================================================
// MULTI-HOP ASSOCIATION TESTS (6 tests)
@@ -41,9 +41,7 @@ fn test_spreading_finds_hidden_chains() {
let results = network.activate("rust_async", 1.0);
// Should find "green_threads" through the chain
let found_green_threads = results
.iter()
.any(|r| r.memory_id == "green_threads");
let found_green_threads = results.iter().any(|r| r.memory_id == "green_threads");
assert!(
found_green_threads,
@@ -71,9 +69,24 @@ fn test_spreading_3_hop_discovery() {
let mut network = ActivationNetwork::with_config(config);
// Create a 3-hop chain: A -> B -> C -> D
network.add_edge("memory_a".to_string(), "memory_b".to_string(), LinkType::Semantic, 0.9);
network.add_edge("memory_b".to_string(), "memory_c".to_string(), LinkType::Semantic, 0.9);
network.add_edge("memory_c".to_string(), "memory_d".to_string(), LinkType::Semantic, 0.9);
network.add_edge(
"memory_a".to_string(),
"memory_b".to_string(),
LinkType::Semantic,
0.9,
);
network.add_edge(
"memory_b".to_string(),
"memory_c".to_string(),
LinkType::Semantic,
0.9,
);
network.add_edge(
"memory_c".to_string(),
"memory_d".to_string(),
LinkType::Semantic,
0.9,
);
let results = network.activate("memory_a", 1.0);
@@ -147,8 +160,18 @@ fn test_spreading_beats_similarity_search() {
fn test_spreading_path_tracking() {
let mut network = ActivationNetwork::new();
network.add_edge("start".to_string(), "middle".to_string(), LinkType::Semantic, 0.9);
network.add_edge("middle".to_string(), "end".to_string(), LinkType::Semantic, 0.9);
network.add_edge(
"start".to_string(),
"middle".to_string(),
LinkType::Semantic,
0.9,
);
network.add_edge(
"middle".to_string(),
"end".to_string(),
LinkType::Semantic,
0.9,
);
let results = network.activate("start", 1.0);
@@ -167,10 +190,30 @@ fn test_spreading_convergent_activation() {
let mut network = ActivationNetwork::new();
// Create convergent paths: A -> B -> D and A -> C -> D
network.add_edge("source".to_string(), "path1".to_string(), LinkType::Semantic, 0.8);
network.add_edge("source".to_string(), "path2".to_string(), LinkType::Semantic, 0.8);
network.add_edge("path1".to_string(), "target".to_string(), LinkType::Semantic, 0.8);
network.add_edge("path2".to_string(), "target".to_string(), LinkType::Semantic, 0.8);
network.add_edge(
"source".to_string(),
"path1".to_string(),
LinkType::Semantic,
0.8,
);
network.add_edge(
"source".to_string(),
"path2".to_string(),
LinkType::Semantic,
0.8,
);
network.add_edge(
"path1".to_string(),
"target".to_string(),
LinkType::Semantic,
0.8,
);
network.add_edge(
"path2".to_string(),
"target".to_string(),
LinkType::Semantic,
0.8,
);
let results = network.activate("source", 1.0);
@@ -244,13 +287,35 @@ fn test_activation_decay_per_hop() {
let results = network.activate("a", 1.0);
let b_activation = results.iter().find(|r| r.memory_id == "b").map(|r| r.activation).unwrap_or(0.0);
let c_activation = results.iter().find(|r| r.memory_id == "c").map(|r| r.activation).unwrap_or(0.0);
let d_activation = results.iter().find(|r| r.memory_id == "d").map(|r| r.activation).unwrap_or(0.0);
let b_activation = results
.iter()
.find(|r| r.memory_id == "b")
.map(|r| r.activation)
.unwrap_or(0.0);
let c_activation = results
.iter()
.find(|r| r.memory_id == "c")
.map(|r| r.activation)
.unwrap_or(0.0);
let d_activation = results
.iter()
.find(|r| r.memory_id == "d")
.map(|r| r.activation)
.unwrap_or(0.0);
// Each hop should reduce activation by decay factor (0.7)
assert!(b_activation > c_activation, "Activation should decay: b ({}) > c ({})", b_activation, c_activation);
assert!(c_activation > d_activation, "Activation should decay: c ({}) > d ({})", c_activation, d_activation);
assert!(
b_activation > c_activation,
"Activation should decay: b ({}) > c ({})",
b_activation,
c_activation
);
assert!(
c_activation > d_activation,
"Activation should decay: c ({}) > d ({})",
c_activation,
d_activation
);
// Verify approximate decay rate (allowing for floating point)
let ratio_bc = c_activation / b_activation;
@@ -289,8 +354,16 @@ fn test_activation_decay_factor_configurable() {
let high_results = high_network.activate("a", 1.0);
let low_results = low_network.activate("a", 1.0);
let high_c = high_results.iter().find(|r| r.memory_id == "c").map(|r| r.activation).unwrap_or(0.0);
let low_c = low_results.iter().find(|r| r.memory_id == "c").map(|r| r.activation).unwrap_or(0.0);
let high_c = high_results
.iter()
.find(|r| r.memory_id == "c")
.map(|r| r.activation)
.unwrap_or(0.0);
let low_c = low_results
.iter()
.find(|r| r.memory_id == "c")
.map(|r| r.activation)
.unwrap_or(0.0);
assert!(
high_c > low_c,
@@ -320,10 +393,8 @@ fn test_activation_distance_law() {
let results = network.activate("n0", 1.0);
// Collect activations by distance
let mut activations_by_distance: Vec<(u32, f64)> = results
.iter()
.map(|r| (r.distance, r.activation))
.collect();
let mut activations_by_distance: Vec<(u32, f64)> =
results.iter().map(|r| (r.distance, r.activation)).collect();
activations_by_distance.sort_by_key(|(d, _)| *d);
// Verify monotonic decrease with distance
@@ -559,9 +630,21 @@ fn test_link_type_weights() {
let results = network.activate("event", 1.0);
// Verify different activations based on edge strength
let semantic_act = results.iter().find(|r| r.memory_id == "semantic_link").map(|r| r.activation).unwrap_or(0.0);
let temporal_act = results.iter().find(|r| r.memory_id == "temporal_link").map(|r| r.activation).unwrap_or(0.0);
let causal_act = results.iter().find(|r| r.memory_id == "causal_link").map(|r| r.activation).unwrap_or(0.0);
let semantic_act = results
.iter()
.find(|r| r.memory_id == "semantic_link")
.map(|r| r.activation)
.unwrap_or(0.0);
let temporal_act = results
.iter()
.find(|r| r.memory_id == "temporal_link")
.map(|r| r.activation)
.unwrap_or(0.0);
let causal_act = results
.iter()
.find(|r| r.memory_id == "causal_link")
.map(|r| r.activation)
.unwrap_or(0.0);
// Semantic (0.9) > Causal (0.7) > Temporal (0.5)
assert!(
@@ -617,10 +700,30 @@ fn test_network_builds_from_semantic_similarity() {
// These would typically be built from embedding similarity
// Rust async ecosystem
network.add_edge("async_rust".to_string(), "tokio".to_string(), LinkType::Semantic, 0.9);
network.add_edge("async_rust".to_string(), "async_await".to_string(), LinkType::Semantic, 0.95);
network.add_edge("tokio".to_string(), "runtime".to_string(), LinkType::Semantic, 0.8);
network.add_edge("tokio".to_string(), "spawn".to_string(), LinkType::Semantic, 0.85);
network.add_edge(
"async_rust".to_string(),
"tokio".to_string(),
LinkType::Semantic,
0.9,
);
network.add_edge(
"async_rust".to_string(),
"async_await".to_string(),
LinkType::Semantic,
0.95,
);
network.add_edge(
"tokio".to_string(),
"runtime".to_string(),
LinkType::Semantic,
0.8,
);
network.add_edge(
"tokio".to_string(),
"spawn".to_string(),
LinkType::Semantic,
0.85,
);
assert_eq!(network.node_count(), 5);
assert_eq!(network.edge_count(), 4);
@@ -642,9 +745,24 @@ fn test_network_builds_from_temporal_proximity() {
// Events that happened close in time
// Morning standup sequence
network.add_edge("standup".to_string(), "jira_update".to_string(), LinkType::Temporal, 0.9);
network.add_edge("jira_update".to_string(), "code_review".to_string(), LinkType::Temporal, 0.85);
network.add_edge("code_review".to_string(), "merge_pr".to_string(), LinkType::Temporal, 0.8);
network.add_edge(
"standup".to_string(),
"jira_update".to_string(),
LinkType::Temporal,
0.9,
);
network.add_edge(
"jira_update".to_string(),
"code_review".to_string(),
LinkType::Temporal,
0.85,
);
network.add_edge(
"code_review".to_string(),
"merge_pr".to_string(),
LinkType::Temporal,
0.8,
);
// Verify temporal chain
let results = network.activate("standup", 1.0);
@@ -748,6 +866,13 @@ fn test_network_batch_construction() {
let distance_1: Vec<_> = results.iter().filter(|r| r.distance == 1).collect();
let distance_2: Vec<_> = results.iter().filter(|r| r.distance == 2).collect();
assert_eq!(distance_1.len(), 3, "Should have 3 nodes at distance 1 (cargo, ownership, traits)");
assert!(distance_2.len() >= 4, "Should have at least 4 nodes at distance 2");
assert_eq!(
distance_1.len(),
3,
"Should have 3 nodes at distance 1 (cargo, ownership, traits)"
);
assert!(
distance_2.len() >= 4,
"Should have at least 4 nodes at distance 2"
);
}

@@ -11,12 +11,12 @@
//!
//! Based on security testing principles and fuzzing methodologies
use chrono::Utc;
use vestige_core::neuroscience::hippocampal_index::HippocampalIndex;
use vestige_core::neuroscience::spreading_activation::{
ActivationConfig, ActivationNetwork, LinkType,
};
use vestige_core::neuroscience::synaptic_tagging::SynapticTaggingSystem;
use vestige_core::neuroscience::hippocampal_index::HippocampalIndex;
use chrono::Utc;
// ============================================================================
// MALFORMED INPUT HANDLING (2 tests)
@@ -30,8 +30,18 @@ fn test_adversarial_empty_inputs() {
let mut network = ActivationNetwork::new();
// Empty string node IDs
network.add_edge("".to_string(), "target".to_string(), LinkType::Semantic, 0.5);
network.add_edge("source".to_string(), "".to_string(), LinkType::Semantic, 0.5);
network.add_edge(
"".to_string(),
"target".to_string(),
LinkType::Semantic,
0.5,
);
network.add_edge(
"source".to_string(),
"".to_string(),
LinkType::Semantic,
0.5,
);
network.add_edge("".to_string(), "".to_string(), LinkType::Semantic, 0.5);
// Should handle gracefully
@@ -40,8 +50,18 @@ fn test_adversarial_empty_inputs() {
let _ = results.len();
// Whitespace-only IDs
network.add_edge(" ".to_string(), "normal".to_string(), LinkType::Semantic, 0.6);
network.add_edge("\t\n".to_string(), "normal".to_string(), LinkType::Temporal, 0.5);
network.add_edge(
" ".to_string(),
"normal".to_string(),
LinkType::Semantic,
0.6,
);
network.add_edge(
"\t\n".to_string(),
"normal".to_string(),
LinkType::Temporal,
0.5,
);
let whitespace_results = network.activate(" ", 1.0);
let _ = whitespace_results.len();
@@ -65,16 +85,24 @@ fn test_adversarial_extremely_long_inputs() {
let long_id_1: String = "a".repeat(10000);
let long_id_2: String = "b".repeat(10000);
network.add_edge(long_id_1.clone(), long_id_2.clone(), LinkType::Semantic, 0.8);
network.add_edge(
long_id_1.clone(),
long_id_2.clone(),
LinkType::Semantic,
0.8,
);
// Should handle long IDs
let results = network.activate(&long_id_1, 1.0);
assert_eq!(results.len(), 1, "Should find connection to long_id_2");
assert_eq!(results[0].memory_id, long_id_2, "Result should have correct long ID");
assert_eq!(
results[0].memory_id, long_id_2,
"Result should have correct long ID"
);
// Test with hippocampal index
let index = HippocampalIndex::new();
let very_long_content = "word ".repeat(50000); // ~250KB of text
let result = index.index_memory(
"long_content_memory",
@@ -100,18 +128,18 @@ fn test_adversarial_unicode_handling() {
// Various Unicode edge cases
let unicode_ids = vec![
"简体中文", // Chinese
"日本語テキスト", // Japanese
"한국어", // Korean
"مرحبا", // Arabic (RTL)
"שלום", // Hebrew (RTL)
"🦀🔥💯", // Emojis
"Ã̲̊", // Combining characters
"\u{200B}", // Zero-width space
"\u{FEFF}", // BOM
"a\u{0308}", // 'a' with combining umlaut
"🏳️‍🌈", // Emoji sequence with ZWJ
"\u{202E}reversed\u{202C}", // RTL override
"简体中文", // Chinese
"日本語テキスト", // Japanese
"한국어", // Korean
"مرحبا", // Arabic (RTL)
"שלום", // Hebrew (RTL)
"🦀🔥💯", // Emojis
"Ã̲̊", // Combining characters
"\u{200B}", // Zero-width space
"\u{FEFF}", // BOM
"a\u{0308}", // 'a' with combining umlaut
"🏳️‍🌈", // Emoji sequence with ZWJ
"\u{202E}reversed\u{202C}", // RTL override
];
for (i, id) in unicode_ids.iter().enumerate() {
@@ -153,13 +181,13 @@ fn test_adversarial_control_characters() {
// IDs with embedded control characters
let control_ids = vec![
"before\0after", // Null byte
"line1\nline2", // Newline
"tab\there", // Tab
"return\rhere", // Carriage return
"bell\x07ring", // Bell
"escape\x1B[31m", // ANSI escape
"backspace\x08x", // Backspace
"before\0after", // Null byte
"line1\nline2", // Newline
"tab\there", // Tab
"return\rhere", // Carriage return
"bell\x07ring", // Bell
"escape\x1B[31m", // ANSI escape
"backspace\x08x", // Backspace
];
for (i, id) in control_ids.iter().enumerate() {
@@ -227,9 +255,11 @@ fn test_adversarial_weight_boundaries() {
let results = network.activate("hub", 1.0);
// Higher weights should produce higher activation
let mut activations: Vec<(&str, f64)> = weight_cases.iter()
let mut activations: Vec<(&str, f64)> = weight_cases
.iter()
.filter_map(|(name, _)| {
results.iter()
results
.iter()
.find(|r| r.memory_id == format!("weight_{}", name))
.map(|r| (*name, r.activation))
})
@@ -249,7 +279,8 @@ fn test_adversarial_weight_boundaries() {
}
// Zero weight edges might not propagate activation at all
let zero_activation = results.iter()
let zero_activation = results
.iter()
.find(|r| r.memory_id == "weight_zero")
.map(|r| r.activation);
@@ -307,10 +338,7 @@ fn test_adversarial_config_boundaries() {
zero_hops_net.add_edge("a".to_string(), "b".to_string(), LinkType::Semantic, 0.9);
let zero_results = zero_hops_net.activate("a", 1.0);
assert!(
zero_results.is_empty(),
"Zero max_hops should find nothing"
);
assert!(zero_results.is_empty(), "Zero max_hops should find nothing");
}
// ============================================================================
@@ -332,9 +360,24 @@ fn test_adversarial_cyclic_graphs() {
let mut no_cycle_net = ActivationNetwork::with_config(no_cycle_config);
// Create a simple cycle: A -> B -> C -> A
no_cycle_net.add_edge("cycle_a".to_string(), "cycle_b".to_string(), LinkType::Semantic, 0.9);
no_cycle_net.add_edge("cycle_b".to_string(), "cycle_c".to_string(), LinkType::Semantic, 0.9);
no_cycle_net.add_edge("cycle_c".to_string(), "cycle_a".to_string(), LinkType::Semantic, 0.9);
no_cycle_net.add_edge(
"cycle_a".to_string(),
"cycle_b".to_string(),
LinkType::Semantic,
0.9,
);
no_cycle_net.add_edge(
"cycle_b".to_string(),
"cycle_c".to_string(),
LinkType::Semantic,
0.9,
);
no_cycle_net.add_edge(
"cycle_c".to_string(),
"cycle_a".to_string(),
LinkType::Semantic,
0.9,
);
let start = std::time::Instant::now();
let results = no_cycle_net.activate("cycle_a", 1.0);
@@ -387,10 +430,20 @@ fn test_adversarial_self_loops() {
let mut network = ActivationNetwork::new();
// Create self-loop
network.add_edge("self_loop".to_string(), "self_loop".to_string(), LinkType::Semantic, 0.9);
network.add_edge(
"self_loop".to_string(),
"self_loop".to_string(),
LinkType::Semantic,
0.9,
);
// Also connect to other nodes
network.add_edge("self_loop".to_string(), "other".to_string(), LinkType::Semantic, 0.7);
network.add_edge(
"self_loop".to_string(),
"other".to_string(),
LinkType::Semantic,
0.7,
);
let start = std::time::Instant::now();
let results = network.activate("self_loop", 1.0);
@@ -423,7 +476,12 @@ fn test_adversarial_special_numeric_values() {
// We're testing that the system doesn't crash
// Normal edge for baseline
network.add_edge("normal".to_string(), "target".to_string(), LinkType::Semantic, 0.8);
network.add_edge(
"normal".to_string(),
"target".to_string(),
LinkType::Semantic,
0.8,
);
// Test activation with edge case values
// (The implementation should clamp or validate these)
@@ -457,5 +515,8 @@ fn test_adversarial_special_numeric_values() {
// Edge should still exist and be valid
let assoc = network.get_associations("normal");
assert!(!assoc.is_empty(), "Edge should still exist after negative reinforce attempt");
assert!(
!assoc.is_empty(),
"Edge should still exist after negative reinforce attempt"
);
}


@@ -13,15 +13,13 @@
//! Based on Chaos Engineering principles (Netflix, 2011)
use chrono::{Duration, Utc};
use vestige_core::neuroscience::hippocampal_index::{HippocampalIndex, IndexQuery};
use vestige_core::neuroscience::spreading_activation::{
ActivationConfig, ActivationNetwork, LinkType,
};
use vestige_core::neuroscience::synaptic_tagging::{
CaptureWindow, ImportanceEvent, SynapticTaggingConfig, SynapticTaggingSystem,
};
use vestige_core::neuroscience::hippocampal_index::{
HippocampalIndex, IndexQuery,
};
// ============================================================================
// RANDOM OPERATION SEQUENCE TESTS (2 tests)
@@ -68,11 +66,7 @@ fn test_chaos_random_operation_sequence() {
// Interleave reinforcement
if i >= 7 {
network2.reinforce_edge(
&format!("node_{}", i - 7),
&format!("node_{}", i % 50),
0.1,
);
network2.reinforce_edge(&format!("node_{}", i - 7), &format!("node_{}", i % 50), 0.1);
}
}
@@ -135,7 +129,10 @@ fn test_chaos_add_remove_cycles() {
// Verify system still works
let results = network.activate(&format!("stable_{}", cycle % 20), 1.0);
assert!(!results.is_empty(), "System should remain functional during chaos");
assert!(
!results.is_empty(),
"System should remain functional during chaos"
);
}
// Final activation should still work
@@ -229,7 +226,12 @@ fn test_chaos_continuous_growth_under_load() {
let mut network = ActivationNetwork::new();
// Initial seed
network.add_edge("root".to_string(), "child_0".to_string(), LinkType::Semantic, 0.8);
network.add_edge(
"root".to_string(),
"child_0".to_string(),
LinkType::Semantic,
0.8,
);
// Continuously grow while querying
for iteration in 0..500 {
@@ -270,10 +272,7 @@ fn test_chaos_continuous_growth_under_load() {
);
let final_results = network.activate("root", 1.0);
assert!(
!final_results.is_empty(),
"Final activation should succeed"
);
assert!(!final_results.is_empty(), "Final activation should succeed");
}
// ============================================================================
@@ -286,8 +285,8 @@ fn test_chaos_continuous_growth_under_load() {
#[test]
fn test_chaos_deep_chain_handling() {
let config = ActivationConfig {
decay_factor: 0.95, // High to allow deep traversal
max_hops: 100, // Allow deep exploration
decay_factor: 0.95, // High to allow deep traversal
max_hops: 100, // Allow deep exploration
min_threshold: 0.001, // Low threshold
allow_cycles: false,
};
@@ -299,7 +298,7 @@ fn test_chaos_deep_chain_handling() {
format!("deep_{}", i),
format!("deep_{}", i + 1),
LinkType::Semantic,
0.99, // Very strong links
0.99, // Very strong links
);
}
@@ -389,21 +388,21 @@ fn test_chaos_high_fanout_handling() {
/// Validates that the capture window handles edge cases correctly.
#[test]
fn test_chaos_capture_window_edge_cases() {
let window = CaptureWindow::new(9.0, 2.0); // 9 hours back, 2 forward
let window = CaptureWindow::new(9.0, 2.0); // 9 hours back, 2 forward
let event_time = Utc::now();
// Test exact boundary conditions
let test_cases = vec![
// (hours offset, expected in window)
(0.0, true), // Exactly at event
(8.99, true), // Just inside back window
(9.0, true), // At back boundary
(9.01, false), // Just outside back window
(-1.99, true), // Just inside forward window
(-2.0, true), // At forward boundary
(-2.01, false), // Just outside forward window
(100.0, false), // Way outside
(-100.0, false), // Way outside forward
(0.0, true), // Exactly at event
(8.99, true), // Just inside back window
(9.0, true), // At back boundary
(9.01, false), // Just outside back window
(-1.99, true), // Just inside forward window
(-2.0, true), // At forward boundary
(-2.01, false), // Just outside forward window
(100.0, false), // Way outside
(-100.0, false), // Way outside forward
];
for (hours_offset, expected) in test_cases {
@@ -441,7 +440,7 @@ fn test_chaos_ancient_memories() {
let mut stc = SynapticTaggingSystem::with_config(config);
// Tag memories at various ages
stc.tag_memory("very_old"); // Will be tagged "now" for testing
stc.tag_memory("very_old"); // Will be tagged "now" for testing
stc.tag_memory("old");
stc.tag_memory("recent");
@@ -478,11 +477,17 @@ fn test_chaos_isolated_subsystem_failures() {
// Query non-existent node should return empty, not crash
let results = network.activate("nonexistent", 1.0);
assert!(results.is_empty(), "Non-existent node should return empty results");
assert!(
results.is_empty(),
"Non-existent node should return empty results"
);
// System should still work after "failed" query
let valid_results = network.activate("a", 1.0);
assert!(!valid_results.is_empty(), "System should work after handling missing node");
assert!(
!valid_results.is_empty(),
"System should work after handling missing node"
);
// Test 2: STC with edge case inputs
let mut stc = SynapticTaggingSystem::new();


@@ -9,14 +9,14 @@
//!
//! Based on mathematical foundations of memory systems and neuroscience
use vestige_core::neuroscience::spreading_activation::{
ActivationConfig, ActivationNetwork, LinkType,
};
use chrono::{Duration, Utc};
use std::collections::HashMap;
use vestige_core::neuroscience::hippocampal_index::{
BarcodeGenerator, HippocampalIndex, INDEX_EMBEDDING_DIM,
};
use chrono::{Duration, Utc};
use std::collections::HashMap;
use vestige_core::neuroscience::spreading_activation::{
ActivationConfig, ActivationNetwork, LinkType,
};
// ============================================================================
// EXPONENTIAL DECAY VALIDATION (1 test)
@@ -43,7 +43,7 @@ fn test_math_exponential_decay_law() {
format!("node_{}", i),
format!("node_{}", i + 1),
LinkType::Semantic,
1.0, // Unit weight to isolate decay effect
1.0, // Unit weight to isolate decay effect
);
}
@@ -218,10 +218,10 @@ fn test_math_activation_bounds() {
// Total activation should be bounded
// (for a tree with decay d, total <= 1 / (1 - d) for geometric series)
let total_activation: f64 = results.iter().map(|r| r.activation).sum();
let theoretical_max = 1.0 / (1.0 - 0.8); // = 5.0 for infinite series
let theoretical_max = 1.0 / (1.0 - 0.8); // = 5.0 for infinite series
assert!(
total_activation < theoretical_max * 3.0, // Allow margin for fan-out and multi-source
total_activation < theoretical_max * 3.0, // Allow margin for fan-out and multi-source
"Total activation should be bounded: {} < {}",
total_activation,
theoretical_max * 3.0
@@ -276,7 +276,8 @@ fn test_math_barcode_statistics() {
// Test 3: Content fingerprints should be mostly unique
// (with 10000 samples, collision probability is low for good hash)
let unique_fingerprints: std::collections::HashSet<u32> = fingerprints.iter().copied().collect();
let unique_fingerprints: std::collections::HashSet<u32> =
fingerprints.iter().copied().collect();
let uniqueness_ratio = unique_fingerprints.len() as f64 / num_barcodes as f64;
assert!(
@@ -322,9 +323,7 @@ fn test_math_embedding_dimensions() {
let now = Utc::now();
// Create full-size embedding (384 dimensions)
let full_embedding: Vec<f32> = (0..384)
.map(|i| (i as f32 / 384.0).sin())
.collect();
let full_embedding: Vec<f32> = (0..384).map(|i| (i as f32 / 384.0).sin()).collect();
// Index memory with embedding
let result = index.index_memory(
@@ -340,8 +339,7 @@ fn test_math_embedding_dimensions() {
// Verify index stats show correct dimensions
let stats = index.stats();
assert_eq!(
stats.index_dimensions,
INDEX_EMBEDDING_DIM,
stats.index_dimensions, INDEX_EMBEDDING_DIM,
"Index should use compressed embedding dimension ({})",
INDEX_EMBEDDING_DIM
);

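The activation-bounds hunk above rests on a geometric-series argument: with per-hop decay `d`, the activation reaching hop `k` of a chain is `d^k`, so the total over all hops is bounded by `1 / (1 - d)`. A minimal standalone sketch of that bound — illustrative only, assuming unit edge weights and a linear chain rather than the actual `vestige_core` propagation:

```rust
fn main() {
    let decay: f64 = 0.8;
    // Activation reaching hop k of a linear chain: decay^k.
    let total: f64 = (0..100).map(|k| decay.powi(k)).sum();
    // Geometric-series bound cited in the test: 1 / (1 - d) = 5.0 for d = 0.8.
    let bound = 1.0 / (1.0 - decay);
    // Every partial sum stays strictly under the infinite-series limit,
    // which is why the test can assert a hard cap on total activation.
    assert!(total < bound);
    println!("total = {total:.6}, bound = {bound}");
}
```

The test's `theoretical_max * 3.0` margin then simply widens this single-chain bound to absorb fan-out and multiple activation sources.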

@@ -10,6 +10,10 @@
//! Each test demonstrates a capability that traditional systems cannot match.
use chrono::{Duration, Utc};
use std::collections::{HashMap, HashSet};
use vestige_core::neuroscience::hippocampal_index::{
HippocampalIndex, INDEX_EMBEDDING_DIM, IndexQuery,
};
use vestige_core::neuroscience::spreading_activation::{
ActivationConfig, ActivationNetwork, LinkType,
};
@@ -17,10 +21,6 @@ use vestige_core::neuroscience::synaptic_tagging::{
CaptureWindow, ImportanceEvent, ImportanceEventType, SynapticTaggingConfig,
SynapticTaggingSystem,
};
use vestige_core::neuroscience::hippocampal_index::{
HippocampalIndex, IndexQuery, INDEX_EMBEDDING_DIM,
};
use std::collections::{HashMap, HashSet};
// ============================================================================
// RETROACTIVE IMPORTANCE - UNIQUE TO VESTIGE (1 test)
@@ -43,7 +43,7 @@ fn test_proof_retroactive_importance_unique() {
min_tag_strength: 0.2,
max_cluster_size: 100,
enable_clustering: true,
auto_decay: false, // Disable for test stability
auto_decay: false, // Disable for test stability
cleanup_interval_hours: 24.0,
};
@@ -75,7 +75,7 @@ fn test_proof_retroactive_importance_unique() {
event_type: ImportanceEventType::EmotionalContent,
memory_id: Some("bob_departure".to_string()),
timestamp: Utc::now(),
strength: 1.0, // Maximum importance
strength: 1.0, // Maximum importance
context: Some("BREAKING: Bob is leaving the company!".to_string()),
};
@@ -90,7 +90,8 @@ fn test_proof_retroactive_importance_unique() {
);
// 2. Earlier Bob-related memories should be captured
let captured_ids: HashSet<_> = capture_result.captured_memories
let captured_ids: HashSet<_> = capture_result
.captured_memories
.iter()
.map(|c| c.memory_id.as_str())
.collect();
@@ -170,20 +171,51 @@ fn test_proof_multi_hop_beats_similarity() {
let mut network = ActivationNetwork::with_config(config);
// Create the knowledge chain (domain knowledge graph)
network.add_edge("memory_leaks".to_string(), "reference_counting".to_string(), LinkType::Causal, 0.9);
network.add_edge("reference_counting".to_string(), "arc_weak".to_string(), LinkType::Semantic, 0.85);
network.add_edge("arc_weak".to_string(), "cyclic_references".to_string(), LinkType::Semantic, 0.9);
network.add_edge("cyclic_references".to_string(), "solution_weak_refs".to_string(), LinkType::Semantic, 0.95);
network.add_edge(
"memory_leaks".to_string(),
"reference_counting".to_string(),
LinkType::Causal,
0.9,
);
network.add_edge(
"reference_counting".to_string(),
"arc_weak".to_string(),
LinkType::Semantic,
0.85,
);
network.add_edge(
"arc_weak".to_string(),
"cyclic_references".to_string(),
LinkType::Semantic,
0.9,
);
network.add_edge(
"cyclic_references".to_string(),
"solution_weak_refs".to_string(),
LinkType::Semantic,
0.95,
);
// Also add some direct but less relevant connections
network.add_edge("memory_leaks".to_string(), "valgrind".to_string(), LinkType::Semantic, 0.7);
network.add_edge("memory_leaks".to_string(), "profiling".to_string(), LinkType::Semantic, 0.6);
network.add_edge(
"memory_leaks".to_string(),
"valgrind".to_string(),
LinkType::Semantic,
0.7,
);
network.add_edge(
"memory_leaks".to_string(),
"profiling".to_string(),
LinkType::Semantic,
0.6,
);
// === SPREADING ACTIVATION SEARCH ===
let spreading_results = network.activate("memory_leaks", 1.0);
// Collect what spreading activation found
let spreading_found: HashSet<_> = spreading_results.iter()
let spreading_found: HashSet<_> = spreading_results
.iter()
.map(|r| r.memory_id.as_str())
.collect();
@@ -198,7 +230,9 @@ fn test_proof_multi_hop_beats_similarity() {
impl MockSimilaritySearch {
fn search(&self, query: &str, top_k: usize) -> Vec<(&str, f64)> {
let query_emb = self.embeddings.get(query).unwrap();
let mut results: Vec<_> = self.embeddings.iter()
let mut results: Vec<_> = self
.embeddings
.iter()
.filter(|(k, _)| k.as_str() != query)
.map(|(k, emb)| {
let sim = cosine_sim(query_emb, emb);
@@ -223,17 +257,27 @@ fn test_proof_multi_hop_beats_similarity() {
}
// Create mock embeddings where memory_leaks and cyclic_references are ORTHOGONAL
let mut mock = MockSimilaritySearch { embeddings: HashMap::new() };
mock.embeddings.insert("memory_leaks".to_string(), vec![1.0, 0.0, 0.0, 0.0]);
mock.embeddings.insert("reference_counting".to_string(), vec![0.7, 0.7, 0.0, 0.0]);
mock.embeddings.insert("arc_weak".to_string(), vec![0.0, 0.7, 0.7, 0.0]);
mock.embeddings.insert("cyclic_references".to_string(), vec![0.0, 0.0, 0.0, 1.0]); // ORTHOGONAL!
mock.embeddings.insert("solution_weak_refs".to_string(), vec![0.0, 0.0, 0.2, 0.9]);
mock.embeddings.insert("valgrind".to_string(), vec![0.8, 0.2, 0.0, 0.0]); // Similar
mock.embeddings.insert("profiling".to_string(), vec![0.6, 0.4, 0.0, 0.0]); // Similar
let mut mock = MockSimilaritySearch {
embeddings: HashMap::new(),
};
mock.embeddings
.insert("memory_leaks".to_string(), vec![1.0, 0.0, 0.0, 0.0]);
mock.embeddings
.insert("reference_counting".to_string(), vec![0.7, 0.7, 0.0, 0.0]);
mock.embeddings
.insert("arc_weak".to_string(), vec![0.0, 0.7, 0.7, 0.0]);
mock.embeddings
.insert("cyclic_references".to_string(), vec![0.0, 0.0, 0.0, 1.0]); // ORTHOGONAL!
mock.embeddings
.insert("solution_weak_refs".to_string(), vec![0.0, 0.0, 0.2, 0.9]);
mock.embeddings
.insert("valgrind".to_string(), vec![0.8, 0.2, 0.0, 0.0]); // Similar
mock.embeddings
.insert("profiling".to_string(), vec![0.6, 0.4, 0.0, 0.0]); // Similar
let similarity_results = mock.search("memory_leaks", 10);
let similarity_found: HashSet<_> = similarity_results.iter()
let similarity_found: HashSet<_> = similarity_results
.iter()
.filter(|(_, sim)| *sim > 0.3)
.map(|(id, _)| *id)
.collect();
@@ -257,13 +301,16 @@ fn test_proof_multi_hop_beats_similarity() {
);
// Verify the discovery path
let solution_result = spreading_results.iter()
let solution_result = spreading_results
.iter()
.find(|r| r.memory_id == "solution_weak_refs")
.expect("Should find solution");
assert_eq!(solution_result.distance, 4, "Solution is 4 hops away");
assert!(
solution_result.path.contains(&"cyclic_references".to_string()),
solution_result
.path
.contains(&"cyclic_references".to_string()),
"Path should include cyclic_references"
);
}
@@ -291,8 +338,12 @@ fn test_proof_hippocampal_indexing_efficiency() {
let _ = index.index_memory(
&format!("memory_{}", i),
&format!("This is memory number {} with content about topic {} and subtopic {}",
i, i % 50, i % 10),
&format!(
"This is memory number {} with content about topic {} and subtopic {}",
i,
i % 50,
i % 10
),
"fact",
now,
Some(embedding),
@@ -356,7 +407,7 @@ fn test_proof_hippocampal_indexing_efficiency() {
);
// 5. Memory efficiency
let memory_per_full = 384 * 4; // 384 floats * 4 bytes
let memory_per_full = 384 * 4; // 384 floats * 4 bytes
let memory_per_index = INDEX_EMBEDDING_DIM * 4;
let savings_per_memory = memory_per_full - memory_per_index;
let total_savings = savings_per_memory * NUM_MEMORIES;
@@ -389,10 +440,10 @@ fn test_proof_temporal_capture_accuracy() {
// Memories encoded BEFORE the important event can be captured
let backward_tests = vec![
(Duration::hours(1), true, 1.0), // 1h before - should be captured with high prob
(Duration::hours(4), true, 0.9), // 4h before - should be captured
(Duration::hours(8), true, 0.5), // 8h before - edge of window
(Duration::hours(9), true, 0.0), // 9h before - at boundary
(Duration::hours(1), true, 1.0), // 1h before - should be captured with high prob
(Duration::hours(4), true, 0.9), // 4h before - should be captured
(Duration::hours(8), true, 0.5), // 8h before - edge of window
(Duration::hours(9), true, 0.0), // 9h before - at boundary
(Duration::hours(10), false, 0.0), // 10h before - outside window
];
@@ -401,7 +452,8 @@ fn test_proof_temporal_capture_accuracy() {
let in_window = window.is_in_window(memory_time, event_time);
assert_eq!(
in_window, *should_be_in_window,
in_window,
*should_be_in_window,
"PROOF: Memory {}h before event: in_window={}, expected={}",
offset.num_hours(),
in_window,
@@ -421,10 +473,10 @@ fn test_proof_temporal_capture_accuracy() {
// Brief period for memories encoded shortly after
let forward_tests = vec![
(Duration::minutes(30), true), // 30min after - in window
(Duration::hours(1), true), // 1h after - in window
(Duration::hours(2), true), // 2h after - at boundary
(Duration::hours(3), false), // 3h after - outside
(Duration::minutes(30), true), // 30min after - in window
(Duration::hours(1), true), // 1h after - in window
(Duration::hours(2), true), // 2h after - at boundary
(Duration::hours(3), false), // 3h after - outside
];
for (offset, should_be_in_window) in &forward_tests {
@@ -432,7 +484,8 @@ fn test_proof_temporal_capture_accuracy() {
let in_window = window.is_in_window(memory_time, event_time);
assert_eq!(
in_window, *should_be_in_window,
in_window,
*should_be_in_window,
"PROOF: Memory {}min after event: in_window={}, expected={}",
offset.num_minutes(),
in_window,
@@ -472,7 +525,10 @@ fn test_proof_comprehensive_capability_summary() {
let result = stc.trigger_prp(event);
let has_retroactive = result.has_captures();
assert!(has_retroactive, "Capability 1: Retroactive importance - PROVEN");
assert!(
has_retroactive,
"Capability 1: Retroactive importance - PROVEN"
);
// === CAPABILITY 2: Multi-Hop Discovery ===
// Traditional: NO (1-hop only) | Vestige: YES (configurable depth)
@@ -492,25 +548,36 @@ fn test_proof_comprehensive_capability_summary() {
let results = network.activate("a", 1.0);
let max_distance = results.iter().map(|r| r.distance).max().unwrap_or(0);
assert!(max_distance >= 4, "Capability 2: Multi-hop discovery (4+ hops) - PROVEN");
assert!(
max_distance >= 4,
"Capability 2: Multi-hop discovery (4+ hops) - PROVEN"
);
// === CAPABILITY 3: Compressed Hippocampal Index ===
// Traditional: Full embeddings | Vestige: Compressed index
let compression = 384.0 / INDEX_EMBEDDING_DIM as f64;
assert!(compression >= 2.0, "Capability 3: Hippocampal compression ({:.1}x) - PROVEN", compression);
assert!(
compression >= 2.0,
"Capability 3: Hippocampal compression ({:.1}x) - PROVEN",
compression
);
// === CAPABILITY 4: Asymmetric Temporal Windows ===
// Traditional: NO temporal reasoning | Vestige: Biologically-grounded windows
let window = CaptureWindow::new(9.0, 2.0);
let asymmetric = 9.0 / 2.0;
assert!(asymmetric > 4.0, "Capability 4: Asymmetric capture windows ({}:1) - PROVEN", asymmetric);
assert!(
asymmetric > 4.0,
"Capability 4: Asymmetric capture windows ({}:1) - PROVEN",
asymmetric
);
// === CAPABILITY 5: Path Tracking ===
// Traditional: Returns items only | Vestige: Returns full association paths
let path_result = &results[results.len() - 1]; // Furthest result
let path_result = &results[results.len() - 1]; // Furthest result
let has_path = !path_result.path.is_empty();
assert!(has_path, "Capability 5: Association path tracking - PROVEN");
@@ -518,10 +585,30 @@ fn test_proof_comprehensive_capability_summary() {
// Traditional: Single similarity metric | Vestige: Multiple link types
let mut typed_network = ActivationNetwork::new();
typed_network.add_edge("event".to_string(), "cause".to_string(), LinkType::Causal, 0.9);
typed_network.add_edge("event".to_string(), "time".to_string(), LinkType::Temporal, 0.9);
typed_network.add_edge("event".to_string(), "concept".to_string(), LinkType::Semantic, 0.9);
typed_network.add_edge("event".to_string(), "location".to_string(), LinkType::Spatial, 0.9);
typed_network.add_edge(
"event".to_string(),
"cause".to_string(),
LinkType::Causal,
0.9,
);
typed_network.add_edge(
"event".to_string(),
"time".to_string(),
LinkType::Temporal,
0.9,
);
typed_network.add_edge(
"event".to_string(),
"concept".to_string(),
LinkType::Semantic,
0.9,
);
typed_network.add_edge(
"event".to_string(),
"location".to_string(),
LinkType::Spatial,
0.9,
);
let typed_results = typed_network.activate("event", 1.0);
let link_types: HashSet<_> = typed_results.iter().map(|r| r.link_type).collect();


@@ -10,6 +10,10 @@
//! Each test cites the specific research findings being validated.
use chrono::{Duration, Utc};
use std::collections::HashSet;
use vestige_core::neuroscience::hippocampal_index::{
HippocampalIndex, HippocampalIndexConfig, IndexQuery,
};
use vestige_core::neuroscience::spreading_activation::{
ActivationConfig, ActivationNetwork, LinkType,
};
@@ -17,10 +21,6 @@ use vestige_core::neuroscience::synaptic_tagging::{
CaptureWindow, ImportanceEvent, ImportanceEventType, SynapticTaggingConfig,
SynapticTaggingSystem,
};
use vestige_core::neuroscience::hippocampal_index::{
HippocampalIndex, HippocampalIndexConfig, IndexQuery,
};
use std::collections::HashSet;
// ============================================================================
// COLLINS & LOFTUS (1975) SPREADING ACTIVATION VALIDATION (1 test)
@@ -39,7 +39,7 @@ use std::collections::HashSet;
#[test]
fn test_research_collins_loftus_spreading_activation() {
let config = ActivationConfig {
decay_factor: 0.75, // Semantic distance decay
decay_factor: 0.75, // Semantic distance decay
max_hops: 4,
min_threshold: 0.05,
allow_cycles: false,
@@ -48,29 +48,91 @@ fn test_research_collins_loftus_spreading_activation() {
// Recreate classic semantic network from the paper
// "Fire truck" example: fire_truck -> red -> roses, fire_truck -> vehicle
network.add_edge("fire_truck".to_string(), "red".to_string(), LinkType::Semantic, 0.9);
network.add_edge("fire_truck".to_string(), "vehicle".to_string(), LinkType::Semantic, 0.85);
network.add_edge("fire_truck".to_string(), "fire".to_string(), LinkType::Semantic, 0.9);
network.add_edge("red".to_string(), "roses".to_string(), LinkType::Semantic, 0.7);
network.add_edge("red".to_string(), "cherries".to_string(), LinkType::Semantic, 0.65);
network.add_edge("red".to_string(), "apples".to_string(), LinkType::Semantic, 0.7);
network.add_edge("vehicle".to_string(), "car".to_string(), LinkType::Semantic, 0.8);
network.add_edge("vehicle".to_string(), "truck".to_string(), LinkType::Semantic, 0.85);
network.add_edge("fire".to_string(), "flames".to_string(), LinkType::Semantic, 0.9);
network.add_edge("fire".to_string(), "heat".to_string(), LinkType::Semantic, 0.8);
network.add_edge(
"fire_truck".to_string(),
"red".to_string(),
LinkType::Semantic,
0.9,
);
network.add_edge(
"fire_truck".to_string(),
"vehicle".to_string(),
LinkType::Semantic,
0.85,
);
network.add_edge(
"fire_truck".to_string(),
"fire".to_string(),
LinkType::Semantic,
0.9,
);
network.add_edge(
"red".to_string(),
"roses".to_string(),
LinkType::Semantic,
0.7,
);
network.add_edge(
"red".to_string(),
"cherries".to_string(),
LinkType::Semantic,
0.65,
);
network.add_edge(
"red".to_string(),
"apples".to_string(),
LinkType::Semantic,
0.7,
);
network.add_edge(
"vehicle".to_string(),
"car".to_string(),
LinkType::Semantic,
0.8,
);
network.add_edge(
"vehicle".to_string(),
"truck".to_string(),
LinkType::Semantic,
0.85,
);
network.add_edge(
"fire".to_string(),
"flames".to_string(),
LinkType::Semantic,
0.9,
);
network.add_edge(
"fire".to_string(),
"heat".to_string(),
LinkType::Semantic,
0.8,
);
// Add convergent paths (multiple routes to same concept)
network.add_edge("apples".to_string(), "fruit".to_string(), LinkType::Semantic, 0.9);
network.add_edge("cherries".to_string(), "fruit".to_string(), LinkType::Semantic, 0.9);
network.add_edge(
"apples".to_string(),
"fruit".to_string(),
LinkType::Semantic,
0.9,
);
network.add_edge(
"cherries".to_string(),
"fruit".to_string(),
LinkType::Semantic,
0.9,
);
let results = network.activate("fire_truck", 1.0);
// Validation 1: Direct connections (distance 1) have highest activation
let red_activation = results.iter()
let red_activation = results
.iter()
.find(|r| r.memory_id == "red")
.map(|r| r.activation)
.unwrap_or(0.0);
let roses_activation = results.iter()
let roses_activation = results
.iter()
.find(|r| r.memory_id == "roses")
.map(|r| r.activation)
.unwrap_or(0.0);
@@ -83,11 +145,13 @@ fn test_research_collins_loftus_spreading_activation() {
);
// Validation 2: Activation decreases with semantic distance
let distance_1: Vec<f64> = results.iter()
let distance_1: Vec<f64> = results
.iter()
.filter(|r| r.distance == 1)
.map(|r| r.activation)
.collect();
let distance_2: Vec<f64> = results.iter()
let distance_2: Vec<f64> = results
.iter()
.filter(|r| r.distance == 2)
.map(|r| r.activation)
.collect();
@@ -107,7 +171,10 @@ fn test_research_collins_loftus_spreading_activation() {
assert!(reachable.contains("red"), "Should reach 'red'");
assert!(reachable.contains("vehicle"), "Should reach 'vehicle'");
assert!(reachable.contains("fire"), "Should reach 'fire'");
assert!(reachable.contains("roses"), "Should reach 'roses' through 'red'");
assert!(
reachable.contains("roses"),
"Should reach 'roses' through 'red'"
);
// Validation 4: Path information is preserved
let roses_result = results.iter().find(|r| r.memory_id == "roses").unwrap();
@@ -135,7 +202,7 @@ fn test_research_collins_loftus_spreading_activation() {
#[test]
fn test_research_frey_morris_synaptic_tagging() {
let config = SynapticTaggingConfig {
capture_window: CaptureWindow::new(9.0, 2.0), // Hours: 9 back, 2 forward
capture_window: CaptureWindow::new(9.0, 2.0), // Hours: 9 back, 2 forward
prp_threshold: 0.7,
tag_lifetime_hours: 12.0,
min_tag_strength: 0.3,
@@ -148,7 +215,7 @@ fn test_research_frey_morris_synaptic_tagging() {
let mut stc = SynapticTaggingSystem::with_config(config);
// Finding 1: Weak stimulation creates tags
stc.tag_memory_with_strength("weak_stim_1", 0.4); // Above min (0.3), weak
stc.tag_memory_with_strength("weak_stim_1", 0.4); // Above min (0.3), weak
stc.tag_memory_with_strength("weak_stim_2", 0.5);
let stats_after_weak = stc.stats();
@@ -164,7 +231,7 @@ fn test_research_frey_morris_synaptic_tagging() {
event_type: ImportanceEventType::EmotionalContent,
memory_id: Some("strong_trigger".to_string()),
timestamp: Utc::now(),
strength: 0.95, // Above threshold (0.7)
strength: 0.95, // Above threshold (0.7)
context: Some("Strong emotional event triggers PRP".to_string()),
};
@@ -245,21 +312,26 @@ fn test_research_teyler_rudy_hippocampal_indexing() {
.map(|i| ((i as f32 / 100.0) * std::f32::consts::PI).sin())
.collect();
let barcode = index.index_memory(
"episodic_memory_1",
"Detailed episodic memory content with rich context",
"episodic",
now,
Some(full_embedding.clone()),
).expect("Should create barcode");
let barcode = index
.index_memory(
"episodic_memory_1",
"Detailed episodic memory content with rich context",
"episodic",
now,
Some(full_embedding.clone()),
)
.expect("Should create barcode");
// Barcode should be a valid identifier (u64 ID)
// First barcode may have id=0, which is valid
assert!(barcode.creation_hash > 0 || barcode.content_fingerprint > 0,
"T&R Finding 1: Barcode should have valid fingerprints");
assert!(
barcode.creation_hash > 0 || barcode.content_fingerprint > 0,
"T&R Finding 1: Barcode should have valid fingerprints"
);
// Finding 2: Index points to content (content pointers)
let memory_index = index.get_index("episodic_memory_1")
let memory_index = index
.get_index("episodic_memory_1")
.expect("Should retrieve")
.expect("Should exist");
@@ -348,7 +420,7 @@ fn test_research_ebbinghaus_forgetting_curve() {
let forgetting_curve = |t: f64| -> f64 {
// Ebbinghaus formula: R = e^(-t/S) where S is stability
let stability = 2.0; // Memory stability parameter
let stability = 2.0; // Memory stability parameter
(-t / stability).exp()
};
@@ -368,7 +440,10 @@ fn test_research_ebbinghaus_forgetting_curve() {
// Collect activations by "age"
let mut age_activations: Vec<(u32, f64)> = Vec::new();
for t in 0..10 {
if let Some(result) = results.iter().find(|r| r.memory_id == format!("memory_age_{}", t)) {
if let Some(result) = results
.iter()
.find(|r| r.memory_id == format!("memory_age_{}", t))
{
age_activations.push((t, result.activation));
}
}
@@ -387,7 +462,8 @@ fn test_research_ebbinghaus_forgetting_curve() {
// Check that differences decrease over time
if age_activations.len() >= 3 {
let diff_early = age_activations[0].1 - age_activations[1].1;
let diff_late = age_activations[age_activations.len() - 2].1 - age_activations[age_activations.len() - 1].1;
let diff_late = age_activations[age_activations.len() - 2].1
- age_activations[age_activations.len() - 1].1;
// Early differences should be larger (rapid initial forgetting)
// But we need to account for near-zero values at the end
@@ -403,8 +479,18 @@ fn test_research_ebbinghaus_forgetting_curve() {
// Finding 3: Test overlearning (reinforcement)
let mut overlearned_network = ActivationNetwork::new();
overlearned_network.add_edge("study".to_string(), "normal_learning".to_string(), LinkType::Semantic, 0.5);
overlearned_network.add_edge("study".to_string(), "overlearned".to_string(), LinkType::Semantic, 0.5);
overlearned_network.add_edge(
"study".to_string(),
"normal_learning".to_string(),
LinkType::Semantic,
0.5,
);
overlearned_network.add_edge(
"study".to_string(),
"overlearned".to_string(),
LinkType::Semantic,
0.5,
);
// Simulate overlearning with multiple reinforcements
for _ in 0..5 {
@@ -413,11 +499,13 @@ fn test_research_ebbinghaus_forgetting_curve() {
let study_results = overlearned_network.activate("study", 1.0);
let normal_act = study_results.iter()
let normal_act = study_results
.iter()
.find(|r| r.memory_id == "normal_learning")
.map(|r| r.activation)
.unwrap_or(0.0);
let overlearned_act = study_results.iter()
let overlearned_act = study_results
.iter()
.find(|r| r.memory_id == "overlearned")
.map(|r| r.activation)
.unwrap_or(0.0);
@@ -447,7 +535,7 @@
#[test]
fn test_research_fsrs6_properties() {
// FSRS-6 default weights
const W20: f64 = 0.1542; // Forgetting curve exponent
const W20: f64 = 0.1542; // Forgetting curve exponent
// FSRS-6 retrievability formula
fn fsrs6_retrievability(stability: f64, elapsed_days: f64, w20: f64) -> f64 {
@@ -455,7 +543,9 @@
return 1.0;
}
let factor = 0.9_f64.powf(-1.0 / w20) - 1.0;
(1.0 + factor * elapsed_days / stability).powf(-w20).clamp(0.0, 1.0)
(1.0 + factor * elapsed_days / stability)
.powf(-w20)
.clamp(0.0, 1.0)
}
// Property 1: R = 0.9 when t = S (by design)

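The FSRS-6 hunk above carries the retrievability formula, and the first property the test checks — R = 0.9 exactly when elapsed time equals stability — is algebraic: `factor` is defined as `0.9^(-1/w20) - 1`, so at `t = S` the expression collapses to `(0.9^(-1/w20))^(-w20) = 0.9`. A self-contained check of that invariant; the function body mirrors the diff, while `main` and its sample inputs are illustrative:

```rust
fn fsrs6_retrievability(stability: f64, elapsed_days: f64, w20: f64) -> f64 {
    if elapsed_days <= 0.0 {
        return 1.0;
    }
    // factor is chosen so that retrievability is exactly 0.9 at t = S.
    let factor = 0.9_f64.powf(-1.0 / w20) - 1.0;
    (1.0 + factor * elapsed_days / stability)
        .powf(-w20)
        .clamp(0.0, 1.0)
}

fn main() {
    const W20: f64 = 0.1542; // forgetting-curve exponent from the diff
    // Property 1: R = 0.9 when t = S, regardless of the stability value.
    let r = fsrs6_retrievability(2.0, 2.0, W20);
    assert!((r - 0.9).abs() < 1e-9);
    // Retrievability decreases monotonically as elapsed time grows.
    assert!(fsrs6_retrievability(2.0, 10.0, W20) < r);
}
```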

@@ -15,8 +15,8 @@
use chrono::{Duration, Utc};
use vestige_core::{
advanced::dreams::{
ActivityTracker, ConnectionGraph, ConnectionReason, ConsolidationScheduler,
DreamConfig, DreamMemory, MemoryDreamer,
ActivityTracker, ConnectionGraph, ConnectionReason, ConsolidationScheduler, DreamConfig,
DreamMemory, MemoryDreamer,
},
consolidation::SleepConsolidation,
};
@@ -82,10 +82,7 @@ fn test_consolidation_detects_idle_periods() {
// Initially should be idle (no activity)
let stats = scheduler.get_activity_stats();
assert!(stats.is_idle, "Fresh scheduler should be idle");
// Record activity - should no longer be idle
scheduler.record_activity();
@ -93,10 +90,7 @@ fn test_consolidation_detects_idle_periods() {
scheduler.record_activity();
let active_stats = scheduler.get_activity_stats();
assert!(!active_stats.is_idle, "Should not be idle after activity");
assert_eq!(
active_stats.total_events, 3,
"Should track 3 activity events"
@ -176,9 +170,24 @@ fn test_connections_form_between_related_memories() {
let mut graph = ConnectionGraph::new();
// Add connections simulating discovered relationships
graph.add_connection(
"rust_async",
"tokio_runtime",
0.9,
ConnectionReason::Semantic,
);
graph.add_connection(
"tokio_runtime",
"green_threads",
0.8,
ConnectionReason::Semantic,
);
graph.add_connection(
"rust_async",
"futures_crate",
0.85,
ConnectionReason::SharedConcepts,
);
// Verify graph structure
let stats = graph.get_stats();
@ -329,9 +338,18 @@ fn test_pruning_removes_weak_memories() {
// Verify the config accessor works
let config = consolidation.config();
assert!(
!config.enable_pruning,
"Default should have pruning disabled"
);
assert!(
config.pruning_threshold > 0.0,
"Should have a threshold configured"
);
assert!(
config.pruning_min_age_days > 0,
"Should have a min age configured"
);
}
// ============================================================================
@ -419,22 +437,13 @@ fn test_retention_calculation() {
// Full retrieval, max storage
let r2 = consolidation.calculate_retention(10.0, 1.0);
assert!((r2 - 1.0).abs() < 0.01, "Max everything should be ~1.0");
// Low retrieval, max storage
let r3 = consolidation.calculate_retention(10.0, 0.0);
assert!((r3 - 0.3).abs() < 0.01, "Low retrieval should cap at ~0.3");
// Both low
let r4 = consolidation.calculate_retention(0.0, 0.0);
assert!(r4 < 0.1, "Both low should mean low retention");
}


@ -13,9 +13,9 @@
//! 5. User merges memories from multiple sources
use chrono::{DateTime, Duration, Utc};
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use vestige_core::memory::IngestInput;
// ============================================================================
// EXPORT/IMPORT FORMAT
@ -183,7 +183,10 @@ fn test_export_serializes_memories_to_json() {
assert!(json.contains("\"metadata\""), "Should contain metadata");
// Verify content is present
assert!(
json.contains("Rust ownership"),
"Should contain memory content"
);
assert!(json.contains("rust"), "Should contain tags");
// Verify FSRS state
@ -219,12 +222,21 @@ fn test_import_deserializes_json_to_memories() {
// Verify memories
let mem1 = &imported.memories[0];
assert!(
mem1.content.contains("ownership"),
"Content should be preserved"
);
assert!(
mem1.tags.contains(&"rust".to_string()),
"Tags should be preserved"
);
assert!(mem1.stability > 0.0, "Stability should be preserved");
// Verify metadata
assert_eq!(
imported.metadata.get("project"),
Some(&"vestige".to_string())
);
}
// ============================================================================
@ -268,12 +280,24 @@ fn test_roundtrip_preserves_all_data() {
assert_eq!(imported.content, original.content, "Content should match");
assert_eq!(imported.node_type, original.node_type, "Type should match");
assert_eq!(imported.tags, original.tags, "Tags should match");
assert_eq!(
imported.stability, original.stability,
"Stability should match"
);
assert_eq!(
imported.difficulty, original.difficulty,
"Difficulty should match"
);
assert_eq!(imported.reps, original.reps, "Reps should match");
assert_eq!(imported.lapses, original.lapses, "Lapses should match");
assert_eq!(
imported.sentiment_score, original.sentiment_score,
"Sentiment score should match"
);
assert_eq!(
imported.sentiment_magnitude, original.sentiment_magnitude,
"Sentiment magnitude should match"
);
assert_eq!(imported.source, original.source, "Source should match");
}
@ -290,11 +314,13 @@ fn test_roundtrip_preserves_all_data() {
#[test]
fn test_selective_export_by_tags() {
// Create memories with different tags
let memories = [
ExportedMemory::new("Rust ownership", "concept", vec!["rust", "memory"]),
ExportedMemory::new("Python generators", "concept", vec!["python", "generators"]),
ExportedMemory::new("Rust borrowing", "concept", vec!["rust", "borrowing"]),
ExportedMemory::new("JavaScript async", "concept", vec!["javascript", "async"]),
ExportedMemory::new("Rust async", "concept", vec!["rust", "async"]),
];
// Filter by "rust" tag
let rust_memories: Vec<_> = memories
@ -307,12 +333,14 @@ fn test_selective_export_by_tags() {
// Filter by multiple tags (rust AND async)
let rust_async_memories: Vec<_> = memories
.iter()
.filter(|m| m.tags.contains(&"rust".to_string()) && m.tags.contains(&"async".to_string()))
.collect();
assert_eq!(
rust_async_memories.len(),
1,
"Should filter to 1 Rust async memory"
);
assert!(rust_async_memories[0].content.contains("Rust async"));
// Export filtered
@ -338,8 +366,14 @@ fn test_selective_export_by_tags() {
fn test_import_merges_with_existing_data() {
// Simulate existing memories
let existing: HashMap<String, ExportedMemory> = [
(
"1".to_string(),
ExportedMemory::new("Rust ownership memory safety", "concept", vec!["rust"]),
),
(
"2".to_string(),
ExportedMemory::new("Rust borrowing rules explained", "concept", vec!["rust"]),
),
]
.into_iter()
.collect();
@ -427,7 +461,10 @@ fn test_empty_bundle_handling() {
// Serialize empty bundle
let json = bundle.to_json().unwrap();
assert!(
json.contains("\"memories\": []"),
"Should have empty memories array"
);
// Deserialize and verify
let imported = ExportBundle::from_json(&json).unwrap();


@ -12,9 +12,9 @@
//! 5. User benefits from improved recall over time
use vestige_core::{
consolidation::SleepConsolidation,
fsrs::{FSRSScheduler, LearningState, Rating},
memory::{IngestInput, RecallInput, SearchMode},
};
// ============================================================================
@ -139,7 +139,10 @@ fn test_review_strengthens_memory_with_fsrs() {
let result = scheduler.review(&initial_state, Rating::Good, 0.0, None);
// Stability should be set from initial parameters
assert!(
result.state.stability > 0.0,
"Stability should be positive after review"
);
// Reps should increase
assert_eq!(result.state.reps, 1, "Reps should increase after review");
@ -160,7 +163,10 @@ fn test_review_strengthens_memory_with_fsrs() {
again_result.interval <= second_result.interval,
"Again rating should reduce interval"
);
assert_eq!(
again_result.state.lapses, 1,
"Lapses should increase on Again"
);
}
// ============================================================================
@ -184,7 +190,11 @@ fn test_memory_lifecycle_follows_expected_pattern() {
// Simulate 10 successful reviews
for i in 0..10 {
let elapsed = if i == 0 {
0.0
} else {
intervals.last().copied().unwrap_or(1) as f64
};
let result = scheduler.review(&state, Rating::Good, elapsed, None);
intervals.push(result.interval);
state = result.state;
@ -192,7 +202,10 @@ fn test_memory_lifecycle_follows_expected_pattern() {
// Verify lifecycle progression
assert!(state.reps >= 10, "Should have at least 10 reps");
assert_eq!(
state.lapses, 0,
"Should have no lapses with all Good ratings"
);
// Verify interval growth (early intervals may be similar, but should eventually grow)
let early_avg: f64 = intervals[..3].iter().map(|&i| i as f64).sum::<f64>() / 3.0;
@ -260,10 +273,7 @@ fn test_sentiment_affects_memory_consolidation() {
// Test promotion boost
let boosted = consolidation.promotion_boost(5.0);
assert!(boosted > 5.0, "Promotion should increase storage strength");
assert!(
boosted <= 10.0,
"Promotion should cap at max storage strength"


@ -13,8 +13,8 @@
//! 5. User benefits from context-aware assistance
use vestige_core::advanced::intent::{
ActionType, DetectedIntent, IntentDetector, LearningLevel, MaintenanceType, OptimizationType,
UserAction,
};
// ============================================================================
@ -89,7 +89,10 @@ fn test_debugging_intent_detection() {
// Check intent properties
match &result.primary_intent {
DetectedIntent::Debugging {
suspected_area,
symptoms,
} => {
assert!(!suspected_area.is_empty(), "Should identify suspected area");
// Symptoms may or may not be captured depending on action order
}
@ -129,10 +132,7 @@ fn test_learning_intent_detection() {
_ => {
// Learning actions should typically detect learning intent
// But other intents may score higher in some cases
assert!(result.confidence > 0.0, "Should detect some intent");
}
}
@ -169,7 +169,9 @@ fn test_refactoring_intent_detection() {
assert!(!target.is_empty(), "Should identify refactoring target");
assert!(!goal.is_empty(), "Should identify refactoring goal");
}
DetectedIntent::NewFeature {
related_components, ..
} => {
// Multiple edits could also suggest new feature
assert!(
related_components.len() >= 0,


@ -12,10 +12,10 @@
//! 4. Activation spreads to related memories via association links
//! 5. User discovers hidden connections they didn't explicitly search for
use std::collections::HashSet;
use vestige_core::neuroscience::spreading_activation::{
ActivationConfig, ActivationNetwork, LinkType,
};
// ============================================================================
// HELPER FUNCTIONS
@ -26,17 +26,62 @@ fn create_coding_network() -> ActivationNetwork {
let mut network = ActivationNetwork::new();
// Rust ecosystem
network.add_edge(
"rust".to_string(),
"ownership".to_string(),
LinkType::Semantic,
0.95,
);
network.add_edge(
"rust".to_string(),
"borrowing".to_string(),
LinkType::Semantic,
0.9,
);
network.add_edge(
"rust".to_string(),
"cargo".to_string(),
LinkType::PartOf,
0.85,
);
network.add_edge(
"ownership".to_string(),
"memory_safety".to_string(),
LinkType::Causal,
0.9,
);
network.add_edge(
"borrowing".to_string(),
"lifetimes".to_string(),
LinkType::Semantic,
0.85,
);
// Async ecosystem
network.add_edge(
"rust".to_string(),
"async_rust".to_string(),
LinkType::Semantic,
0.8,
);
network.add_edge(
"async_rust".to_string(),
"tokio".to_string(),
LinkType::Semantic,
0.9,
);
network.add_edge(
"tokio".to_string(),
"runtime".to_string(),
LinkType::PartOf,
0.85,
);
network.add_edge(
"async_rust".to_string(),
"futures".to_string(),
LinkType::Semantic,
0.85,
);
network
}
@ -52,10 +97,30 @@ fn create_chain_network() -> ActivationNetwork {
let mut network = ActivationNetwork::with_config(config);
// Create a chain: A -> B -> C -> D -> E
network.add_edge(
"node_a".to_string(),
"node_b".to_string(),
LinkType::Semantic,
0.9,
);
network.add_edge(
"node_b".to_string(),
"node_c".to_string(),
LinkType::Semantic,
0.9,
);
network.add_edge(
"node_c".to_string(),
"node_d".to_string(),
LinkType::Semantic,
0.9,
);
network.add_edge(
"node_d".to_string(),
"node_e".to_string(),
LinkType::Semantic,
0.9,
);
network
}
@ -81,7 +146,10 @@ fn test_spreading_finds_hidden_chains() {
// Should find all nodes in the chain
let found_ids: HashSet<_> = results.iter().map(|r| r.memory_id.as_str()).collect();
assert!(
found_ids.contains("node_b"),
"Should find direct neighbor node_b"
);
assert!(found_ids.contains("node_c"), "Should find 2-hop node_c");
assert!(found_ids.contains("node_d"), "Should find 3-hop node_d");
assert!(found_ids.contains("node_e"), "Should find 4-hop node_e");
@ -127,9 +195,21 @@ fn test_activation_decays_with_distance() {
let results = network.activate("a", 1.0);
let act_b = results
.iter()
.find(|r| r.memory_id == "b")
.map(|r| r.activation)
.unwrap_or(0.0);
let act_c = results
.iter()
.find(|r| r.memory_id == "c")
.map(|r| r.activation)
.unwrap_or(0.0);
let act_d = results
.iter()
.find(|r| r.memory_id == "d")
.map(|r| r.activation)
.unwrap_or(0.0);
// Verify monotonic decrease
assert!(act_b > act_c, "b ({:.3}) > c ({:.3})", act_b, act_c);
@ -159,7 +239,12 @@ fn test_edge_reinforcement_hebbian() {
let mut network = ActivationNetwork::new();
// Add edge with moderate strength
network.add_edge(
"concept_a".to_string(),
"concept_b".to_string(),
LinkType::Semantic,
0.5,
);
// Get initial associations
let initial = network.get_associations("concept_a");
@ -169,7 +254,10 @@ fn test_edge_reinforcement_hebbian() {
.map(|a| a.association_strength)
.unwrap_or(0.0);
assert!(
(initial_strength - 0.5).abs() < 0.01,
"Initial should be 0.5"
);
// Reinforce the connection
network.reinforce_edge("concept_a", "concept_b", 0.2);
@ -270,10 +358,30 @@ fn test_different_link_types_affect_activation() {
let mut network = ActivationNetwork::new();
// Add edges with different link types
network.add_edge(
"event".to_string(),
"semantic_rel".to_string(),
LinkType::Semantic,
0.9,
);
network.add_edge(
"event".to_string(),
"temporal_rel".to_string(),
LinkType::Temporal,
0.8,
);
network.add_edge(
"event".to_string(),
"causal_rel".to_string(),
LinkType::Causal,
0.85,
);
network.add_edge(
"event".to_string(),
"part_of_rel".to_string(),
LinkType::PartOf,
0.7,
);
let results = network.activate("event", 1.0);
@ -285,10 +393,22 @@ fn test_different_link_types_affect_activation() {
assert!(found.contains("part_of_rel"));
// Verify link types are preserved
let semantic = results
.iter()
.find(|r| r.memory_id == "semantic_rel")
.unwrap();
let temporal = results
.iter()
.find(|r| r.memory_id == "temporal_rel")
.unwrap();
let causal = results
.iter()
.find(|r| r.memory_id == "causal_rel")
.unwrap();
let part_of = results
.iter()
.find(|r| r.memory_id == "part_of_rel")
.unwrap();
assert_eq!(semantic.link_type, LinkType::Semantic);
assert_eq!(temporal.link_type, LinkType::Temporal);
@ -339,9 +459,9 @@ fn test_max_hops_limit() {
#[test]
fn test_minimum_threshold() {
let config = ActivationConfig {
decay_factor: 0.5, // 50% decay per hop
max_hops: 10, // High limit
min_threshold: 0.2, // But high threshold
allow_cycles: false,
};
let mut network = ActivationNetwork::with_config(config);
@ -370,8 +490,18 @@ fn test_minimum_threshold() {
fn test_path_tracking() {
let mut network = ActivationNetwork::new();
network.add_edge(
"start".to_string(),
"middle".to_string(),
LinkType::Semantic,
0.9,
);
network.add_edge(
"middle".to_string(),
"end".to_string(),
LinkType::Semantic,
0.9,
);
let results = network.activate("start", 1.0);
@ -390,10 +520,30 @@ fn test_convergent_paths() {
let mut network = ActivationNetwork::new();
// Create convergent paths: source -> a -> target and source -> b -> target
network.add_edge(
"source".to_string(),
"path_a".to_string(),
LinkType::Semantic,
0.8,
);
network.add_edge(
"source".to_string(),
"path_b".to_string(),
LinkType::Semantic,
0.8,
);
network.add_edge(
"path_a".to_string(),
"target".to_string(),
LinkType::Semantic,
0.8,
);
network.add_edge(
"path_b".to_string(),
"target".to_string(),
LinkType::Semantic,
0.8,
);
let results = network.activate("source", 1.0);


@ -25,9 +25,18 @@ fn test_jsonrpc_request_required_fields() {
"params": {}
});
assert_eq!(
valid_request["jsonrpc"], "2.0",
"jsonrpc version must be 2.0"
);
assert!(
valid_request["method"].is_string(),
"method must be a string"
);
assert!(
valid_request["id"].is_number(),
"id should be present for requests"
);
}
/// Test that JSON-RPC notifications have no id field.
@ -40,7 +49,10 @@ fn test_jsonrpc_notification_has_no_id() {
"method": "notifications/initialized"
});
assert!(
notification.get("id").is_none(),
"Notifications must not have an id field"
);
assert_eq!(notification["method"], "notifications/initialized");
}
@ -66,8 +78,14 @@ fn test_jsonrpc_success_response_format() {
});
assert_eq!(success_response["jsonrpc"], "2.0");
assert!(
success_response["result"].is_object(),
"Success response must have result"
);
assert!(
success_response.get("error").is_none(),
"Success response must not have error"
);
}
/// Test JSON-RPC response format for errors.
@ -89,10 +107,22 @@ fn test_jsonrpc_error_response_format() {
});
assert_eq!(error_response["jsonrpc"], "2.0");
assert!(
error_response["error"].is_object(),
"Error response must have error object"
);
assert!(
error_response["error"]["code"].is_number(),
"Error must have code"
);
assert!(
error_response["error"]["message"].is_string(),
"Error must have message"
);
assert!(
error_response.get("result").is_none(),
"Error response must not have result"
);
}
// ============================================================================
@ -119,8 +149,12 @@ fn test_standard_jsonrpc_error_codes() {
for (code, message) in error_codes {
// All standard codes are in the reserved range
assert!(
(-32700..=-32600).contains(&code),
"Standard error code {} ({}) must be in reserved range",
code,
message
);
}
}
@ -142,8 +176,12 @@ fn test_mcp_specific_error_codes() {
for (code, name) in mcp_error_codes {
// MCP-specific codes are in the server error range
assert!(
(-32099..=-32000).contains(&code),
"MCP error code {} ({}) must be in server error range",
code,
name
);
}
}
@ -177,11 +215,20 @@ fn test_mcp_initialize_request_format() {
});
let params = &init_request["params"];
assert!(
params["protocolVersion"].is_string(),
"protocolVersion required"
);
assert!(params["capabilities"].is_object(), "capabilities required");
assert!(params["clientInfo"].is_object(), "clientInfo required");
assert!(
params["clientInfo"]["name"].is_string(),
"clientInfo.name required"
);
assert!(
params["clientInfo"]["version"].is_string(),
"clientInfo.version required"
);
}
/// Test MCP initialize response format.
@ -206,11 +253,26 @@ fn test_mcp_initialize_response_format() {
"instructions": "Vestige is your long-term memory system."
});
assert!(
init_response["protocolVersion"].is_string(),
"protocolVersion required"
);
assert!(
init_response["serverInfo"].is_object(),
"serverInfo required"
);
assert!(
init_response["serverInfo"]["name"].is_string(),
"serverInfo.name required"
);
assert!(
init_response["serverInfo"]["version"].is_string(),
"serverInfo.version required"
);
assert!(
init_response["capabilities"].is_object(),
"capabilities required"
);
}
/// Test that requests before initialization are rejected.
@ -229,8 +291,10 @@ fn test_server_rejects_requests_before_initialize() {
}
});
assert_eq!(
pre_init_error["error"]["code"], -32003,
"Pre-initialization requests should return ServerNotInitialized error"
);
}
// ============================================================================
@ -277,9 +341,14 @@ fn test_tools_list_response_format() {
for tool in tools {
assert!(tool["name"].is_string(), "Tool must have name");
assert!(
tool["inputSchema"].is_object(),
"Tool must have inputSchema"
);
assert_eq!(
tool["inputSchema"]["type"], "object",
"inputSchema must be an object type"
);
}
}
@ -306,7 +375,10 @@ fn test_tools_call_request_format() {
let params = &tools_call_request["params"];
assert!(params["name"].is_string(), "Tool name required");
assert!(
params["arguments"].is_object(),
"Arguments should be an object"
);
}
/// Test tools/call response format.
@ -328,8 +400,14 @@ fn test_tools_call_response_format() {
let content = tools_call_response["content"].as_array().unwrap();
assert!(!content.is_empty(), "Content array should not be empty");
assert!(
content[0]["type"].is_string(),
"Content item must have type"
);
assert!(
content[0]["text"].is_string(),
"Text content must have text field"
);
}
// ============================================================================
@ -407,6 +485,8 @@ fn test_resources_read_response_format() {
assert!(!contents.is_empty(), "Contents should not be empty");
assert!(contents[0]["uri"].is_string(), "Content must have uri");
// Must have either text or blob
assert!(
contents[0]["text"].is_string() || contents[0]["blob"].is_string(),
"Content must have text or blob"
);
}


@ -3,7 +3,7 @@
//! Comprehensive tests for all MCP tools provided by Vestige.
//! Tests cover input validation, execution, and response formats.
use serde_json::{Value, json};
// ============================================================================
// HELPER FUNCTIONS
@ -11,7 +11,10 @@ use serde_json::{json, Value};
/// Validate a tool call response structure
fn validate_tool_response(response: &Value) {
assert!(
response["content"].is_array(),
"Response must have content array"
);
let content = response["content"].as_array().unwrap();
assert!(!content.is_empty(), "Content array must not be empty");
assert!(content[0]["type"].is_string(), "Content must have type");
@ -74,7 +77,10 @@ fn test_ingest_tool_rejects_empty_content() {
"isError": true
});
assert_eq!(
expected_error["isError"], true,
"Empty content should be an error"
);
}
/// Test ingest tool with all optional fields.
@ -193,7 +199,10 @@ fn test_semantic_search_valid() {
validate_tool_response(&expected_response);
let parsed = parse_response_text(&expected_response);
assert_eq!(
parsed["method"], "semantic",
"Should indicate semantic search"
);
}
/// Test semantic search handles embedding not ready.
@ -209,7 +218,10 @@ fn test_semantic_search_embedding_not_ready() {
});
let parsed = parse_response_text(&expected_response);
assert!(
parsed["error"].is_string(),
"Should explain embedding not ready"
);
assert!(parsed["hint"].is_string(), "Should provide hint");
}
@ -337,7 +349,10 @@ fn test_mark_reviewed_with_rating() {
let parsed = parse_response_text(&expected_response);
assert_eq!(parsed["success"], true, "Review should succeed");
assert!(
parsed["nextReview"].is_string(),
"Should return next review date"
);
}
/// Test mark_reviewed with invalid rating.
@ -382,8 +397,14 @@ fn test_get_stats() {
validate_tool_response(&expected_response);
let parsed = parse_response_text(&expected_response);
assert!(
parsed["totalNodes"].is_number(),
"Should return total nodes"
);
assert!(
parsed["averageRetention"].is_number(),
"Should return average retention"
);
}
/// Test health_check returns health status.
@ -431,7 +452,10 @@ fn test_set_intention_basic() {
let parsed = parse_response_text(&expected_response);
assert_eq!(parsed["success"], true, "Should succeed");
assert!(
parsed["intentionId"].is_string(),
"Should return intention ID"
);
assert_eq!(parsed["priority"], 3, "High priority should be 3");
}
@ -477,8 +501,14 @@ fn test_check_intentions_with_context() {
});
let parsed = parse_response_text(&expected_response);
assert!(
parsed["triggered"].is_array(),
"Should return triggered intentions"
);
assert!(
parsed["pending"].is_array(),
"Should return pending intentions"
);
}
/// Test complete_intention marks as fulfilled.
@ -523,7 +553,10 @@ fn test_list_intentions_with_filter() {
});
let parsed = parse_response_text(&expected_response);
assert!(
parsed["intentions"].is_array(),
"Should return intentions array"
);
assert_eq!(parsed["status"], "active", "Should echo status filter");
}
@ -553,9 +586,18 @@ fn test_tool_schemas_are_valid_json_schema() {
"required": ["content"]
});
assert_eq!(
ingest_schema["type"], "object",
"Schema must be object type"
);
assert!(
ingest_schema["properties"].is_object(),
"Must have properties"
);
assert!(
ingest_schema["required"].is_array(),
"Must specify required fields"
);
}
/// Test all tools have required inputSchema fields.
@ -575,7 +617,10 @@ fn test_all_tools_have_schema() {
];
for (tool_name, required_fields) in tool_definitions {
assert!(
!required_fields.is_empty(),
"Tool {} should have at least one required field",
tool_name
);
}
}
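The release notes above describe the SIF penalty as `0.15 × suppression_count`, saturating at 80%, subtracted from retrieval scores on top of passive FSRS decay. A minimal sketch of that scoring rule (function names and the subtractive composition are illustrative assumptions, not the actual `vestige_core` API):

```rust
/// Hypothetical sketch of the suppression-induced forgetting (SIF) penalty
/// from the release notes: each `suppress` call compounds, capped at 0.8.
fn sif_penalty(suppression_count: u32) -> f64 {
    (0.15 * suppression_count as f64).min(0.8)
}

/// Illustrative accessibility stage: the penalty stacks on the FSRS
/// retrievability already computed by passive decay, floored at zero.
fn accessibility(retrievability: f64, suppression_count: u32) -> f64 {
    (retrievability - sif_penalty(suppression_count)).max(0.0)
}

fn main() {
    println!("{:.2}", sif_penalty(1)); // 0.15
    println!("{:.2}", sif_penalty(6)); // saturates at 0.80
    println!("{:.2}", accessibility(0.9, 2)); // 0.60
}
```

Because the memory row itself is untouched, `memory.get(id)` still returns full content; only the ranking score in hybrid search is attenuated.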