feat(v2.0.5): Intentional Amnesia — active forgetting via top-down inhibitory control

First AI memory system to model forgetting as a neuroscience-grounded
PROCESS rather than passive decay. Adds the `suppress` MCP tool (#24),
Rac1 cascade worker, migration V10, and dashboard forgetting indicators.

Based on:
- Anderson, Hanslmayr & Quaegebeur (2025), Nat Rev Neurosci — right
  lateral PFC as the domain-general inhibitory controller;
  suppression-induced forgetting (SIF) compounds with each stopping
  attempt.
- Cervantes-Sandoval et al. (2020), Front Cell Neurosci PMC7477079 —
  Rac1 GTPase as the active synaptic destabilization mechanism.

What's new:
* `suppress` MCP tool — each call increments `suppression_count` and
  subtracts a `0.15 × count` penalty (saturating at 80%) from
  retrieval scores during hybrid search. Distinct from delete
  (removes) and demote (one-shot).
* Rac1 cascade worker — background sweep piggybacks on the 6h
  consolidation loop, walks `memory_connections` edges from
  recently-suppressed seeds, and applies attenuated FSRS decay to
  co-activated neighbors. You don't just forget Jake — you fade
  the café, the roommate, the birthday.
* 24h labile window — reversible via `suppress({id, reverse: true})`
  within 24 hours. Matches Nader reconsolidation semantics.
* Migration V10 — additive-only (`suppression_count`, `suppressed_at`
  + partial indices). All v2.0.x DBs upgrade seamlessly on first launch.
* Dashboard: `ForgettingIndicator.svelte` pulses when suppressions
  are active. 3D graph nodes dim to 20% opacity when suppressed.
  New WebSocket events: `MemorySuppressed`, `MemoryUnsuppressed`,
  `Rac1CascadeSwept`. Heartbeat carries `suppressed_count`.
* Search pipeline: SIF penalty inserted into the accessibility stage
  so it stacks on top of passive FSRS decay.
* Tool count bumped 23 → 24. Cognitive modules 29 → 30.
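A minimal sketch of the SIF penalty and cascade math described above. The function and field names are illustrative assumptions, not the real API; in the actual pipeline the penalty is applied inside the hybrid-search accessibility stage:

```rust
/// Suppression-induced forgetting (SIF) penalty: 0.15 per suppression,
/// saturating at 0.80. (Illustrative names; not the real API.)
fn sif_penalty(suppression_count: u32) -> f64 {
    (0.15 * suppression_count as f64).min(0.80)
}

/// Apply the penalty to a retrieval score, on top of passive FSRS decay.
/// Subtractive application is an assumption based on the changelog wording.
fn penalized_score(score: f64, suppression_count: u32) -> f64 {
    (score - sif_penalty(suppression_count)).max(0.0)
}

/// Rac1 cascade: a neighbor co-activated with a suppressed seed receives
/// decay attenuated by connection strength (again, a hypothetical shape).
fn cascade_decay(base_decay: f64, connection_strength: f64, attenuation: f64) -> f64 {
    base_decay * connection_strength * attenuation
}

fn main() {
    // Two suppressions knock 0.30 off a perfect retrieval score.
    println!("{:.2}", penalized_score(1.0, 2));
}
```

The saturation at 0.80 is what keeps a heavily-suppressed memory retrievable at all: the penalty never reaches 1.0, matching the "inhibited, not erased" invariant below.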

Memories persist — they are INHIBITED, not erased. `memory.get(id)`
returns full content through any number of suppressions. The 24h
labile window is a grace period for regret.
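The grace-period check can be sketched with plain std types. The real check presumably lives in the reconsolidation layer and tracks `suppressed_at` per memory; the names here are assumptions:

```rust
use std::time::{Duration, SystemTime};

/// 24h labile window during which a suppression may be reversed.
const LABILE_WINDOW: Duration = Duration::from_secs(24 * 60 * 60);

/// A suppression is reversible only while the labile window is open.
/// (Illustrative sketch, not the actual ReconsolidationManager code.)
fn can_reverse(suppressed_at: SystemTime, now: SystemTime) -> bool {
    match now.duration_since(suppressed_at) {
        Ok(elapsed) => elapsed <= LABILE_WINDOW,
        // `now` precedes `suppressed_at` (clock skew): treat as still labile.
        Err(_) => true,
    }
}

fn main() {
    let t0 = SystemTime::now();
    println!("{}", can_reverse(t0, t0));
}
```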

Also fixes issue #31 (buggy dashboard graph view), a companion UI
bug discovered during the v2.0.5 audit cycle:

* Root cause: node glow `SpriteMaterial` had no `map`, so
  `THREE.Sprite` rendered as a solid-coloured 1×1 plane. Additive
  blending + `UnrealBloomPass(0.8, 0.4, 0.85)` amplified the square
  edges into hard-edged glowing cubes.
* Fix: shared 128×128 radial-gradient `CanvasTexture` singleton used
  as the sprite map. Retuned bloom to `(0.55, 0.6, 0.2)`. Halved fog
  density (0.008 → 0.0035). Edges bumped from dark navy `0x4a4a7a`
  to brand violet `0x8b5cf6` with higher opacity. Added explicit
  `scene.background` and a 2000-point starfield for depth.
* 21 regression tests added in `ui-fixes.test.ts`, locking in every
  invariant (shared texture singleton, `depthWrite: false`, ×6 scale,
  bloom magic numbers via source regex, starfield presence).

Tests: 1,284 Rust (+47) + 171 Vitest (+21) = 1,455 total, 0 failed
Clippy: clean across all targets, zero warnings
Release binary: 22.6MB, `cargo build --release -p vestige-mcp` green
Versions: workspace aligned at 2.0.5 across all 6 crates/packages

Closes #31
Sam Valladares 2026-04-14 17:30:30 -05:00
parent 95bde93b49
commit 8178beb961
359 changed files with 8277 additions and 3416 deletions


@ -1,6 +1,6 @@
[package]
name = "vestige-core"
version = "2.0.4"
version = "2.0.5"
edition = "2024"
rust-version = "1.91"
authors = ["Vestige Team"]


@ -3,10 +3,10 @@
//! Benchmarks for core search operations using Criterion.
//! Run with: cargo bench -p vestige-core
use criterion::{criterion_group, criterion_main, Criterion, black_box};
use vestige_core::search::hyde::{classify_intent, expand_query, centroid_embedding};
use vestige_core::search::{reciprocal_rank_fusion, linear_combination, sanitize_fts5_query};
use criterion::{Criterion, black_box, criterion_group, criterion_main};
use vestige_core::embeddings::cosine_similarity;
use vestige_core::search::hyde::{centroid_embedding, classify_intent, expand_query};
use vestige_core::search::{linear_combination, reciprocal_rank_fusion, sanitize_fts5_query};
fn bench_classify_intent(c: &mut Criterion) {
let queries = [
@ -29,7 +29,9 @@ fn bench_classify_intent(c: &mut Criterion) {
fn bench_expand_query(c: &mut Criterion) {
c.bench_function("expand_query", |b| {
b.iter(|| {
black_box(expand_query("What is spaced repetition and how does FSRS work?"));
black_box(expand_query(
"What is spaced repetition and how does FSRS work?",
));
})
});
}
@ -37,11 +39,7 @@ fn bench_expand_query(c: &mut Criterion) {
fn bench_centroid_embedding(c: &mut Criterion) {
// Simulate 4 embeddings of 256 dimensions
let embeddings: Vec<Vec<f32>> = (0..4)
.map(|i| {
(0..256)
.map(|j| ((i * 256 + j) as f32).sin())
.collect()
})
.map(|i| (0..256).map(|j| ((i * 256 + j) as f32).sin()).collect())
.collect();
c.bench_function("centroid_256d_4vecs", |b| {
@ -61,7 +59,11 @@ fn bench_rrf_fusion(c: &mut Criterion) {
c.bench_function("rrf_50x50", |b| {
b.iter(|| {
black_box(reciprocal_rank_fusion(&keyword_results, &semantic_results, 60.0));
black_box(reciprocal_rank_fusion(
&keyword_results,
&semantic_results,
60.0,
));
})
});
}
@ -76,7 +78,12 @@ fn bench_linear_combination(c: &mut Criterion) {
c.bench_function("linear_combo_50x50", |b| {
b.iter(|| {
black_box(linear_combination(&keyword_results, &semantic_results, 0.3, 0.7));
black_box(linear_combination(
&keyword_results,
&semantic_results,
0.3,
0.7,
));
})
});
}
@ -84,7 +91,9 @@ fn bench_linear_combination(c: &mut Criterion) {
fn bench_sanitize_fts5(c: &mut Criterion) {
c.bench_function("sanitize_fts5_query", |b| {
b.iter(|| {
black_box(sanitize_fts5_query("hello world \"exact phrase\" OR special-chars!@#"));
black_box(sanitize_fts5_query(
"hello world \"exact phrase\" OR special-chars!@#",
));
})
});
}


@ -766,6 +766,6 @@ fn main() -> Result<(), std::io::Error> {
embedder.embed_auto("Another text sample.");
let stats = embedder.stats();
assert!(stats.len() > 0);
assert!(!stats.is_empty());
}
}


@ -300,7 +300,11 @@ impl MemoryChainBuilder {
}
// Sort by score (descending)
all_paths.sort_by(|a, b| b.score.partial_cmp(&a.score).unwrap_or(std::cmp::Ordering::Equal));
all_paths.sort_by(|a, b| {
b.score
.partial_cmp(&a.score)
.unwrap_or(std::cmp::Ordering::Equal)
});
// Return top paths
all_paths.into_iter().take(10).collect()


@ -445,7 +445,11 @@ impl MemoryCompressor {
}
// Sort by importance and deduplicate
facts.sort_by(|a, b| b.importance.partial_cmp(&a.importance).unwrap_or(std::cmp::Ordering::Equal));
facts.sort_by(|a, b| {
b.importance
.partial_cmp(&a.importance)
.unwrap_or(std::cmp::Ordering::Equal)
});
self.deduplicate_facts(facts)
}


@ -432,10 +432,11 @@ impl CrossProjectLearner {
// Check each trigger
for trigger in &pattern.pattern.triggers {
if let Some((matches, reason)) = self.check_trigger(trigger, context)
&& matches {
match_scores.push(trigger.confidence);
match_reasons.push(reason);
}
&& matches
{
match_scores.push(trigger.confidence);
match_reasons.push(reason);
}
}
if match_scores.is_empty() {
@ -547,10 +548,11 @@ impl CrossProjectLearner {
let success_rate = success_count as f64 / total_count as f64;
if let Ok(mut patterns) = self.patterns.write()
&& let Some(pattern) = patterns.get_mut(pattern_id) {
pattern.success_rate = success_rate;
pattern.application_count = total_count as u32;
}
&& let Some(pattern) = patterns.get_mut(pattern_id)
{
pattern.success_rate = success_rate;
pattern.application_count = total_count as u32;
}
}
fn extract_patterns_from_category(
@ -595,38 +597,39 @@ impl CrossProjectLearner {
let pattern_id = format!("auto-{}-{}", category_to_string(&category), keyword);
if let Ok(mut patterns) = self.patterns.write()
&& !patterns.contains_key(&pattern_id) {
patterns.insert(
pattern_id.clone(),
UniversalPattern {
id: pattern_id,
pattern: CodePattern {
name: format!("{} pattern", keyword),
category: category.clone(),
description: format!(
"Pattern involving '{}' observed in {} projects",
keyword,
projects.len()
),
example: None,
triggers: vec![PatternTrigger {
trigger_type: TriggerType::Topic,
value: keyword.clone(),
confidence: 0.5,
}],
benefits: vec![],
considerations: vec![],
},
projects_seen_in: projects.iter().map(|s| s.to_string()).collect(),
success_rate: 0.5, // Default until validated
applicability: format!("When working with {}", keyword),
confidence: 0.5,
first_seen: Utc::now(),
last_seen: Utc::now(),
application_count: 0,
&& !patterns.contains_key(&pattern_id)
{
patterns.insert(
pattern_id.clone(),
UniversalPattern {
id: pattern_id,
pattern: CodePattern {
name: format!("{} pattern", keyword),
category: category.clone(),
description: format!(
"Pattern involving '{}' observed in {} projects",
keyword,
projects.len()
),
example: None,
triggers: vec![PatternTrigger {
trigger_type: TriggerType::Topic,
value: keyword.clone(),
confidence: 0.5,
}],
benefits: vec![],
considerations: vec![],
},
);
}
projects_seen_in: projects.iter().map(|s| s.to_string()).collect(),
success_rate: 0.5, // Default until validated
applicability: format!("When working with {}", keyword),
confidence: 0.5,
first_seen: Utc::now(),
last_seen: Utc::now(),
application_count: 0,
},
);
}
}
}
}


@ -455,9 +455,10 @@ impl ConsolidationScheduler {
// Strengthen connections between sequentially replayed memories
for window in replay.sequence.windows(2) {
if let [id_a, id_b] = window
&& graph.strengthen_connection(id_a, id_b, 0.1) {
strengthened += 1;
}
&& graph.strengthen_connection(id_a, id_b, 0.1)
{
strengthened += 1;
}
}
// Also strengthen based on discovered patterns
@ -704,11 +705,12 @@ impl ConnectionGraph {
for (a, b) in [(from_id, to_id), (to_id, from_id)] {
if let Some(connections) = self.connections.get_mut(a)
&& let Some(conn) = connections.iter_mut().find(|c| c.target_id == b) {
conn.strength = (conn.strength + boost).min(2.0);
conn.last_strengthened = now;
strengthened = true;
}
&& let Some(conn) = connections.iter_mut().find(|c| c.target_id == b)
{
conn.strength = (conn.strength + boost).min(2.0);
conn.last_strengthened = now;
strengthened = true;
}
}
strengthened
@ -1481,9 +1483,10 @@ impl MemoryDreamer {
// Try to generate insight from this cluster
if let Some(insight) = self.generate_insight_from_cluster(&cluster_memories)
&& insight.novelty_score >= self.config.min_novelty {
insights.push(insight);
}
&& insight.novelty_score >= self.config.min_novelty
{
insights.push(insight);
}
if insights.len() >= self.config.max_insights {
break;


@ -232,9 +232,10 @@ impl ImportanceTracker {
// Store context with event
if let Ok(mut events) = self.recent_events.write()
&& let Some(event) = events.last_mut()
&& event.memory_id == memory_id {
event.context = Some(context.to_string());
}
&& event.memory_id == memory_id
{
event.context = Some(context.to_string());
}
}
/// Apply importance decay to all memories
@ -339,7 +340,11 @@ impl ImportanceTracker {
/// Get memories sorted by importance
pub fn get_top_by_importance(&self, limit: usize) -> Vec<ImportanceScore> {
let mut scores = self.get_all_scores();
scores.sort_by(|a, b| b.final_score.partial_cmp(&a.final_score).unwrap_or(std::cmp::Ordering::Equal));
scores.sort_by(|a, b| {
b.final_score
.partial_cmp(&a.final_score)
.unwrap_or(std::cmp::Ordering::Equal)
});
scores.truncate(limit);
scores
}
@ -355,7 +360,9 @@ impl ImportanceTracker {
scores.sort_by(|a, b| {
let a_neglect = a.base_importance - a.usage_importance;
let b_neglect = b.base_importance - b.usage_importance;
b_neglect.partial_cmp(&a_neglect).unwrap_or(std::cmp::Ordering::Equal)
b_neglect
.partial_cmp(&a_neglect)
.unwrap_or(std::cmp::Ordering::Equal)
});
scores.truncate(limit);
@ -446,7 +453,10 @@ mod tests {
assert_eq!(score.retrieval_count, 3);
assert_eq!(score.helpful_count, 3);
// 0.1 * 1.15^3 = ~0.152, so should be > initial 0.1
assert!(score.usage_importance > 0.1, "Should be boosted from baseline");
assert!(
score.usage_importance > 0.1,
"Should be boosted from baseline"
);
}
#[test]


@ -562,9 +562,10 @@ impl IntentDetector {
}
ActionType::FileOpened | ActionType::FileEdited => {
if let Some(file) = &action.file
&& let Some(name) = file.file_name() {
suspected_area = name.to_string_lossy().to_string();
}
&& let Some(name) = file.file_name()
{
suspected_area = name.to_string_lossy().to_string();
}
}
_ => {}
}


@ -29,7 +29,10 @@ pub mod speculative;
// Re-exports for convenient access
pub use adaptive_embedding::{AdaptiveEmbedder, ContentType, EmbeddingStrategy, Language};
pub use chains::{ChainStep, Connection, ConnectionType, MemoryChainBuilder, MemoryNode, MemoryPath, ReasoningChain};
pub use chains::{
ChainStep, Connection, ConnectionType, MemoryChainBuilder, MemoryNode, MemoryPath,
ReasoningChain,
};
pub use compression::{CompressedMemory, CompressionConfig, CompressionStats, MemoryCompressor};
pub use cross_project::{
ApplicableKnowledge, CrossProjectLearner, ProjectContext, UniversalPattern,
@ -58,14 +61,14 @@ pub use dreams::{
};
pub use importance::{ImportanceDecayConfig, ImportanceScore, ImportanceTracker, UsageEvent};
pub use intent::{ActionType, DetectedIntent, IntentDetector, MaintenanceType, UserAction};
pub use reconsolidation::{
AccessContext, AccessTrigger, AppliedModification, ChangeSummary, LabileState, MemorySnapshot,
Modification, ReconsolidatedMemory, ReconsolidationManager, ReconsolidationStats,
RelationshipType, RetrievalRecord,
};
pub use prediction_error::{
CandidateMemory, CreateReason, EvaluationIntent, GateDecision, GateStats, MergeStrategy,
PredictionErrorConfig, PredictionErrorGate, SimilarityResult, SupersedeReason, UpdateType,
cosine_similarity,
};
pub use reconsolidation::{
AccessContext, AccessTrigger, AppliedModification, ChangeSummary, LabileState, MemorySnapshot,
Modification, ReconsolidatedMemory, ReconsolidationManager, ReconsolidationStats,
RelationshipType, RetrievalRecord,
};
pub use speculative::{PredictedMemory, PredictionContext, SpeculativeRetriever, UsagePattern};


@ -123,9 +123,15 @@ impl GateDecision {
/// Get the prediction error score
pub fn prediction_error(&self) -> f32 {
match self {
Self::Create { prediction_error, .. } => *prediction_error,
Self::Update { prediction_error, .. } => *prediction_error,
Self::Supersede { prediction_error, .. } => *prediction_error,
Self::Create {
prediction_error, ..
} => *prediction_error,
Self::Update {
prediction_error, ..
} => *prediction_error,
Self::Supersede {
prediction_error, ..
} => *prediction_error,
Self::Merge { avg_similarity, .. } => 1.0 - avg_similarity,
}
}
@ -368,7 +374,11 @@ impl PredictionErrorGate {
.collect();
// Sort by similarity (highest first)
similarities.sort_by(|a, b| b.similarity.partial_cmp(&a.similarity).unwrap_or(std::cmp::Ordering::Equal));
similarities.sort_by(|a, b| {
b.similarity
.partial_cmp(&a.similarity)
.unwrap_or(std::cmp::Ordering::Equal)
});
// Take top candidates
let top_candidates: Vec<_> = similarities
@ -394,8 +404,9 @@ impl PredictionErrorGate {
if let Some(c) = candidate {
// If similar and the existing memory was demoted, supersede it
if best.similarity >= self.config.similarity_threshold
&& c.was_demoted
&& self.config.auto_supersede_demoted {
&& c.was_demoted
&& self.config.auto_supersede_demoted
{
self.stats.supersedes += 1;
return GateDecision::Supersede {
old_memory_id: c.id.clone(),
@ -406,8 +417,8 @@ impl PredictionErrorGate {
}
// Check for correction (similar but contradictory)
if best.similarity >= self.config.correction_threshold
&& best.appears_contradictory {
if best.similarity >= self.config.correction_threshold && best.appears_contradictory
{
self.stats.supersedes += 1;
return GateDecision::Supersede {
old_memory_id: c.id.clone(),
@ -418,7 +429,8 @@ impl PredictionErrorGate {
}
// Regular update for similar content
if best.similarity >= self.config.similarity_threshold && self.config.prefer_updates {
if best.similarity >= self.config.similarity_threshold && self.config.prefer_updates
{
self.stats.updates += 1;
return GateDecision::Update {
target_id: best.memory_id.clone(),
@ -442,7 +454,10 @@ impl PredictionErrorGate {
self.stats.merges += 1;
return GateDecision::Merge {
memory_ids: merge_candidates.iter().map(|s| s.memory_id.clone()).collect(),
memory_ids: merge_candidates
.iter()
.map(|s| s.memory_id.clone())
.collect(),
avg_similarity,
strategy: MergeStrategy::Combine,
};
@ -501,7 +516,10 @@ impl PredictionErrorGate {
self.evaluate(new_content, new_embedding, candidates)
}
}
EvaluationIntent::Supersede { old_memory_id, reason } => {
EvaluationIntent::Supersede {
old_memory_id,
reason,
} => {
if let Some(c) = candidates.iter().find(|c| c.id == old_memory_id) {
let similarity = cosine_similarity(new_embedding, &c.embedding);
self.stats.supersedes += 1;
@ -515,9 +533,7 @@ impl PredictionErrorGate {
self.evaluate(new_content, new_embedding, candidates)
}
}
EvaluationIntent::Auto => {
self.evaluate(new_content, new_embedding, candidates)
}
EvaluationIntent::Auto => self.evaluate(new_content, new_embedding, candidates),
}
}
@ -596,7 +612,10 @@ pub enum EvaluationIntent {
/// Force update of specific memory
ForceUpdate { target_id: String },
/// Force supersede of specific memory
Supersede { old_memory_id: String, reason: SupersedeReason },
Supersede {
old_memory_id: String,
reason: SupersedeReason,
},
}
// ============================================================================
@ -680,18 +699,22 @@ mod tests {
// Create embeddings with controlled similarity based on seed
// Seeds close to each other = similar vectors
// Seeds far apart = different vectors
(0..384).map(|i| {
let base = (i as f32 / 384.0) * std::f32::consts::PI * 2.0;
(base * seed).sin()
}).collect()
(0..384)
.map(|i| {
let base = (i as f32 / 384.0) * std::f32::consts::PI * 2.0;
(base * seed).sin()
})
.collect()
}
fn make_orthogonal_embedding() -> Vec<f32> {
// Create an embedding that's orthogonal to seed=1.0
(0..384).map(|i| {
let base = (i as f32 / 384.0) * std::f32::consts::PI * 2.0;
(base + std::f32::consts::PI / 2.0).sin() // 90 degree phase shift
}).collect()
(0..384)
.map(|i| {
let base = (i as f32 / 384.0) * std::f32::consts::PI * 2.0;
(base + std::f32::consts::PI / 2.0).sin() // 90 degree phase shift
})
.collect()
}
fn make_candidate(id: &str, seed: f32) -> CandidateMemory {
@ -728,7 +751,13 @@ mod tests {
let decision = gate.evaluate("New content", &embedding, &[]);
assert!(matches!(decision, GateDecision::Create { reason: CreateReason::FirstMemory, .. }));
assert!(matches!(
decision,
GateDecision::Create {
reason: CreateReason::FirstMemory,
..
}
));
}
#[test]
@ -762,7 +791,10 @@ mod tests {
// Should supersede the demoted memory if similarity is above threshold
// If not superseding, it should at least update
assert!(matches!(decision, GateDecision::Supersede { .. } | GateDecision::Update { .. }));
assert!(matches!(
decision,
GateDecision::Supersede { .. } | GateDecision::Update { .. }
));
}
#[test]
@ -812,7 +844,13 @@ mod tests {
EvaluationIntent::ForceCreate,
);
assert!(matches!(decision, GateDecision::Create { reason: CreateReason::ExplicitCreate, .. }));
assert!(matches!(
decision,
GateDecision::Create {
reason: CreateReason::ExplicitCreate,
..
}
));
}
#[test]
@ -825,7 +863,9 @@ mod tests {
"Updated content",
&embedding,
&[candidate],
EvaluationIntent::ForceUpdate { target_id: "mem-1".to_string() },
EvaluationIntent::ForceUpdate {
target_id: "mem-1".to_string(),
},
);
assert!(matches!(decision, GateDecision::Update { .. }));


@ -517,13 +517,14 @@ impl ReconsolidationManager {
}
if let Some(state) = self.labile_memories.get_mut(memory_id)
&& state.is_within_window(self.labile_window) {
let success = state.add_modification(modification);
if success {
self.stats.total_modifications += 1;
}
return success;
&& state.is_within_window(self.labile_window)
{
let success = state.add_modification(modification);
if success {
self.stats.total_modifications += 1;
}
return success;
}
false
}
@ -690,13 +691,14 @@ impl ReconsolidationManager {
if let Ok(history) = self.retrieval_history.read() {
for record in history.iter() {
if record.memory_id == memory_id
&& let Some(context) = &record.context {
for co_id in &context.co_retrieved {
if co_id != memory_id {
*co_retrieved.entry(co_id.clone()).or_insert(0) += 1;
}
&& let Some(context) = &record.context
{
for co_id in &context.co_retrieved {
if co_id != memory_id {
*co_retrieved.entry(co_id.clone()).or_insert(0) += 1;
}
}
}
}
}
@ -921,7 +923,7 @@ mod tests {
#[test]
fn test_relationship_types() {
let relationships = vec![
let relationships = [
RelationshipType::Supports,
RelationshipType::Contradicts,
RelationshipType::Elaborates,


@ -193,7 +193,11 @@ impl SpeculativeRetriever {
// Deduplicate and sort by confidence
predictions = self.deduplicate_predictions(predictions);
predictions.sort_by(|a, b| b.confidence.partial_cmp(&a.confidence).unwrap_or(std::cmp::Ordering::Equal));
predictions.sort_by(|a, b| {
b.confidence
.partial_cmp(&a.confidence)
.unwrap_or(std::cmp::Ordering::Equal)
});
predictions.truncate(MAX_PREDICTIONS);
// Filter by minimum confidence
@ -266,11 +270,12 @@ impl SpeculativeRetriever {
// Update file-memory associations
if let Some(file) = file_context
&& let Ok(mut map) = self.file_memory_map.write() {
map.entry(file.to_string())
.or_insert_with(Vec::new)
.push(memory_id.to_string());
}
&& let Ok(mut map) = self.file_memory_map.write()
{
map.entry(file.to_string())
.or_insert_with(Vec::new)
.push(memory_id.to_string());
}
}
/// Get cached predictions


@ -587,9 +587,10 @@ impl ContextCapture {
// Java Spring
if let Ok(content) = fs::read_to_string(self.project_root.join("pom.xml"))
&& content.contains("spring") {
frameworks.push(Framework::Spring);
}
&& content.contains("spring")
{
frameworks.push(Framework::Spring);
}
// Ruby Rails
if self.file_exists("config/routes.rb") {
@ -613,36 +614,40 @@ impl ContextCapture {
fn detect_project_name(&self) -> Result<Option<String>> {
// Try Cargo.toml
if let Ok(content) = fs::read_to_string(self.project_root.join("Cargo.toml"))
&& let Some(name) = self.extract_toml_value(&content, "name") {
return Ok(Some(name));
}
&& let Some(name) = self.extract_toml_value(&content, "name")
{
return Ok(Some(name));
}
// Try package.json
if let Ok(content) = fs::read_to_string(self.project_root.join("package.json"))
&& let Some(name) = self.extract_json_value(&content, "name") {
return Ok(Some(name));
}
&& let Some(name) = self.extract_json_value(&content, "name")
{
return Ok(Some(name));
}
// Try pyproject.toml
if let Ok(content) = fs::read_to_string(self.project_root.join("pyproject.toml"))
&& let Some(name) = self.extract_toml_value(&content, "name") {
return Ok(Some(name));
}
&& let Some(name) = self.extract_toml_value(&content, "name")
{
return Ok(Some(name));
}
// Try go.mod
if let Ok(content) = fs::read_to_string(self.project_root.join("go.mod"))
&& let Some(line) = content.lines().next()
&& line.starts_with("module ") {
let name = line
.trim_start_matches("module ")
.split('/')
.next_back()
.unwrap_or("")
.to_string();
if !name.is_empty() {
return Ok(Some(name));
}
}
&& line.starts_with("module ")
{
let name = line
.trim_start_matches("module ")
.split('/')
.next_back()
.unwrap_or("")
.to_string();
if !name.is_empty() {
return Ok(Some(name));
}
}
// Fall back to directory name
Ok(self
@ -729,17 +734,18 @@ impl ContextCapture {
for test_dir in test_dirs {
let test_path = self.project_root.join(test_dir);
if test_path.exists()
&& let Ok(entries) = fs::read_dir(&test_path) {
for entry in entries.filter_map(|e| e.ok()) {
let entry_path = entry.path();
if let Some(entry_stem) = entry_path.file_stem() {
let entry_stem = entry_stem.to_string_lossy();
if entry_stem.contains(&stem) {
related.push(entry_path);
}
&& let Ok(entries) = fs::read_dir(&test_path)
{
for entry in entries.filter_map(|e| e.ok()) {
let entry_path = entry.path();
if let Some(entry_stem) = entry_path.file_stem() {
let entry_stem = entry_stem.to_string_lossy();
if entry_stem.contains(&stem) {
related.push(entry_path);
}
}
}
}
}
// For Rust, look for mod.rs in same directory
@ -794,38 +800,40 @@ impl ContextCapture {
// For Rust, use the parent directory name relative to src/
if path.extension().map(|e| e == "rs").unwrap_or(false)
&& let Ok(relative) = path.strip_prefix(&self.project_root)
&& let Ok(src_relative) = relative.strip_prefix("src") {
// Get the module path
let components: Vec<_> = src_relative
.parent()?
.components()
.map(|c| c.as_os_str().to_string_lossy().to_string())
.collect();
&& let Ok(src_relative) = relative.strip_prefix("src")
{
// Get the module path
let components: Vec<_> = src_relative
.parent()?
.components()
.map(|c| c.as_os_str().to_string_lossy().to_string())
.collect();
if !components.is_empty() {
return Some(components.join("::"));
}
}
if !components.is_empty() {
return Some(components.join("::"));
}
}
// For TypeScript/JavaScript, use the parent directory
if path
.extension()
.map(|e| e == "ts" || e == "tsx" || e == "js" || e == "jsx")
.unwrap_or(false)
&& let Ok(relative) = path.strip_prefix(&self.project_root) {
// Skip src/ or lib/ prefix
let relative = relative
.strip_prefix("src")
.or_else(|_| relative.strip_prefix("lib"))
.unwrap_or(relative);
&& let Ok(relative) = path.strip_prefix(&self.project_root)
{
// Skip src/ or lib/ prefix
let relative = relative
.strip_prefix("src")
.or_else(|_| relative.strip_prefix("lib"))
.unwrap_or(relative);
if let Some(parent) = relative.parent() {
let module = parent.to_string_lossy().replace('/', ".");
if !module.is_empty() {
return Some(module);
}
if let Some(parent) = relative.parent() {
let module = parent.to_string_lossy().replace('/', ".");
if !module.is_empty() {
return Some(module);
}
}
}
None
}
@ -865,10 +873,11 @@ impl ContextCapture {
let trimmed = line.trim();
if (trimmed.starts_with(&format!("{} ", key))
|| trimmed.starts_with(&format!("{}=", key)))
&& let Some(value) = trimmed.split('=').nth(1) {
let value = value.trim().trim_matches('"').trim_matches('\'');
return Some(value.to_string());
}
&& let Some(value) = trimmed.split('=').nth(1)
{
let value = value.trim().trim_matches('"').trim_matches('\'');
return Some(value.to_string());
}
}
None
}


@ -275,9 +275,10 @@ impl GitAnalyzer {
files.push(path.to_path_buf());
}
if let Some(path) = delta.old_file().path()
&& !files.contains(&path.to_path_buf()) {
files.push(path.to_path_buf());
}
&& !files.contains(&path.to_path_buf())
{
files.push(path.to_path_buf());
}
}
}
@ -408,7 +409,11 @@ impl GitAnalyzer {
}
// Sort by strength
relationships.sort_by(|a, b| b.strength.partial_cmp(&a.strength).unwrap_or(std::cmp::Ordering::Equal));
relationships.sort_by(|a, b| {
b.strength
.partial_cmp(&a.strength)
.unwrap_or(std::cmp::Ordering::Equal)
});
Ok(relationships)
}
@ -492,9 +497,10 @@ impl GitAnalyzer {
.unwrap_or_else(Utc::now);
if let Some(since_time) = since
&& commit_time < since_time {
continue;
}
&& commit_time < since_time
{
continue;
}
let message = commit.message().map(|m| m.to_string()).unwrap_or_default();
@ -541,7 +547,12 @@ impl GitAnalyzer {
let symptom = if let Some(colon_byte_pos) = first_line.find(':') {
// Convert byte position to char position for safe slicing
let colon_char_pos = first_line[..colon_byte_pos].chars().count();
first_line.chars().skip(colon_char_pos + 1).collect::<String>().trim().to_string()
first_line
.chars()
.skip(colon_char_pos + 1)
.collect::<String>()
.trim()
.to_string()
} else {
first_line.to_string()
};


@ -210,18 +210,23 @@ impl PatternDetector {
for pattern in relevant_patterns {
if let Some(confidence) = self.calculate_match_confidence(code, &code_lower, pattern)
&& confidence >= 0.3 {
matches.push(PatternMatch {
pattern: pattern.clone(),
confidence,
location: None, // Would need line-level analysis
suggestions: self.generate_suggestions(pattern, code),
});
}
&& confidence >= 0.3
{
matches.push(PatternMatch {
pattern: pattern.clone(),
confidence,
location: None, // Would need line-level analysis
suggestions: self.generate_suggestions(pattern, code),
});
}
}
// Sort by confidence
matches.sort_by(|a, b| b.confidence.partial_cmp(&a.confidence).unwrap_or(std::cmp::Ordering::Equal));
matches.sort_by(|a, b| {
b.confidence
.partial_cmp(&a.confidence)
.unwrap_or(std::cmp::Ordering::Equal)
});
Ok(matches)
}
@ -325,7 +330,11 @@ impl PatternDetector {
}
// Sort by relevance
suggestions.sort_by(|a, b| b.relevance.partial_cmp(&a.relevance).unwrap_or(std::cmp::Ordering::Equal));
suggestions.sort_by(|a, b| {
b.relevance
.partial_cmp(&a.relevance)
.unwrap_or(std::cmp::Ordering::Equal)
});
Ok(suggestions)
}


@ -630,9 +630,7 @@ mod tests {
let related = tracker.get_related_files(Path::new("src/main.rs")).unwrap();
assert!(!related.is_empty());
assert!(related
.iter()
.any(|r| r.path == PathBuf::from("src/lib.rs")));
assert!(related.iter().any(|r| r.path == Path::new("src/lib.rs")));
}
#[test]


@ -221,7 +221,6 @@ pub enum DecisionStatus {
Deprecated,
}
// ============================================================================
// BUG FIX
// ============================================================================
@ -273,7 +272,6 @@ pub enum BugSeverity {
Trivial,
}
// ============================================================================
// CODE PATTERN
// ============================================================================


@ -10,13 +10,13 @@
use std::collections::HashSet;
use std::path::{Path, PathBuf};
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::time::Duration;
use chrono::{DateTime, Utc};
use notify::{Config, Event, EventKind, RecommendedWatcher, RecursiveMode, Watcher};
use tokio::sync::{broadcast, mpsc, RwLock};
use tokio::sync::{RwLock, broadcast, mpsc};
use super::patterns::PatternDetector;
use super::relationships::RelationshipTracker;
@ -576,11 +576,12 @@ impl ManualEventHandler {
// Detect patterns
if self.config.detect_patterns
&& let Ok(content) = std::fs::read_to_string(path) {
let language = CodebaseWatcher::detect_language(path);
let detector = self.detector.read().await;
let _ = detector.detect_patterns(&content, &language);
}
&& let Ok(content) = std::fs::read_to_string(path)
{
let language = CodebaseWatcher::detect_language(path);
let detector = self.detector.read().await;
let _ = detector.detect_patterns(&content, &language);
}
Ok(())
}


@ -7,12 +7,11 @@
//! - Prune very weak memories (optional)
//! - 4-Phase biologically-accurate dream cycle (v2.0)
mod sleep;
pub mod phases;
mod sleep;
pub use sleep::SleepConsolidation;
pub use phases::{
DreamEngine, DreamPhase, FourPhaseDreamResult, PhaseResult,
TriagedMemory, TriageCategory, CreativeConnection, CreativeConnectionType,
DreamInsight,
CreativeConnection, CreativeConnectionType, DreamEngine, DreamInsight, DreamPhase,
FourPhaseDreamResult, PhaseResult, TriageCategory, TriagedMemory,
};
pub use sleep::SleepConsolidation;


@ -18,7 +18,7 @@ use std::time::Instant;
use chrono::{DateTime, Utc};
use crate::memory::KnowledgeNode;
use crate::neuroscience::emotional_memory::{EmotionalMemory, EmotionCategory};
use crate::neuroscience::emotional_memory::{EmotionCategory, EmotionalMemory};
use crate::neuroscience::importance_signals::ImportanceSignals;
use crate::neuroscience::synaptic_tagging::SynapticTaggingSystem;
@ -197,13 +197,11 @@ impl DreamEngine {
phases.push(phase2);
// ==================== PHASE 3: REM (Creative) ====================
let (connections, emotional_processed, phase3) =
self.phase_rem(&triaged, emotional_memory);
let (connections, emotional_processed, phase3) = self.phase_rem(&triaged, emotional_memory);
phases.push(phase3);
// ==================== PHASE 4: Integration ====================
let (insights, phase4) =
self.phase_integration(&connections, &triaged);
let (insights, phase4) = self.phase_integration(&connections, &triaged);
phases.push(phase4);
FourPhaseDreamResult {
@ -262,26 +260,31 @@ impl DreamEngine {
}
// Sort by importance (highest first)
triaged.sort_by(|a, b| b.importance.partial_cmp(&a.importance).unwrap_or(std::cmp::Ordering::Equal));
triaged.sort_by(|a, b| {
b.importance
.partial_cmp(&a.importance)
.unwrap_or(std::cmp::Ordering::Equal)
});
// Build replay queue: 70% high-value, 30% random noise floor
let high_value_count = (triaged.len() as f64 * self.high_value_ratio).ceil() as usize;
let random_count = triaged.len().saturating_sub(high_value_count);
let mut replay_queue: Vec<String> = triaged.iter()
let mut replay_queue: Vec<String> = triaged
.iter()
.take(high_value_count)
.map(|m| m.id.clone())
.collect();
// Add random noise floor from the remaining memories
if random_count > 0 {
let remaining: Vec<&TriagedMemory> = triaged.iter()
.skip(high_value_count)
.collect();
let remaining: Vec<&TriagedMemory> = triaged.iter().skip(high_value_count).collect();
// Simple deterministic shuffle using content hash
let mut noise: Vec<&TriagedMemory> = remaining;
noise.sort_by_key(|m| {
let hash: u64 = m.id.bytes().fold(0u64, |acc, b| acc.wrapping_mul(31).wrapping_add(b as u64));
let hash: u64 =
m.id.bytes()
.fold(0u64, |acc, b| acc.wrapping_mul(31).wrapping_add(b as u64));
hash
});
for m in noise.iter().take(random_count) {
@ -307,7 +310,9 @@ impl DreamEngine {
actions.push(format!(
"Replay queue: {} high-value + {} noise = {} total",
high_value_count.min(triaged.len()),
replay_queue.len().saturating_sub(high_value_count.min(triaged.len())),
replay_queue
.len()
.saturating_sub(high_value_count.min(triaged.len())),
replay_queue.len()
));
@ -333,16 +338,25 @@ impl DreamEngine {
emotion: &EmotionCategory,
) -> TriageCategory {
// High emotional content
if matches!(emotion, EmotionCategory::Frustration | EmotionCategory::Urgency | EmotionCategory::Joy | EmotionCategory::Surprise)
&& node.sentiment_magnitude > 0.4 {
return TriageCategory::Emotional;
}
if matches!(
emotion,
EmotionCategory::Frustration
| EmotionCategory::Urgency
| EmotionCategory::Joy
| EmotionCategory::Surprise
) && node.sentiment_magnitude > 0.4
{
return TriageCategory::Emotional;
}
// Future-relevant (intentions, TODOs)
let content_lower = node.content.to_lowercase();
if content_lower.contains("todo") || content_lower.contains("remind")
|| content_lower.contains("intention") || content_lower.contains("next time")
|| content_lower.contains("plan to") {
if content_lower.contains("todo")
|| content_lower.contains("remind")
|| content_lower.contains("intention")
|| content_lower.contains("next time")
|| content_lower.contains("plan to")
{
return TriageCategory::FutureRelevant;
}
@ -403,7 +417,8 @@ impl DreamEngine {
actions.push(format!(
"Processed {} waves of {} memories",
wave_count, replay_queue.len()
wave_count,
replay_queue.len()
));
actions.push(format!(
"Strengthened {} memories via synaptic tagging",
@ -459,7 +474,11 @@ impl DreamEngine {
// Group memories by primary tag for cross-domain pairing
let mut tag_groups: HashMap<String, Vec<&TriagedMemory>> = HashMap::new();
for tm in triaged {
let primary_tag = tm.tags.first().cloned().unwrap_or_else(|| "untagged".to_string());
let primary_tag = tm
.tags
.first()
.cloned()
.unwrap_or_else(|| "untagged".to_string());
tag_groups.entry(primary_tag).or_default().push(tm);
}
@ -487,7 +506,11 @@ impl DreamEngine {
if similarity > self.min_insight_confidence {
let conn_type = self.classify_connection(mem_a, mem_b, similarity);
let insight = self.generate_connection_insight(
mem_a, mem_b, &tag_keys[i], &tag_keys[j], conn_type,
mem_a,
mem_b,
&tag_keys[i],
&tag_keys[j],
conn_type,
);
connections.push(CreativeConnection {
@ -531,7 +554,10 @@ impl DreamEngine {
// Pattern extraction: find repeated patterns across memories
let pattern_count = self.extract_patterns(triaged, &mut connections);
if pattern_count > 0 {
actions.push(format!("Pattern extraction: {} shared patterns found", pattern_count));
actions.push(format!(
"Pattern extraction: {} shared patterns found",
pattern_count
));
}
let phase = PhaseResult {
@ -545,11 +571,13 @@ impl DreamEngine {
}
fn content_similarity(&self, a: &str, b: &str) -> f64 {
let words_a: HashSet<&str> = a.split_whitespace()
let words_a: HashSet<&str> = a
.split_whitespace()
.map(|w| w.trim_matches(|c: char| !c.is_alphanumeric()))
.filter(|w| w.len() > 3)
.collect();
let words_b: HashSet<&str> = b.split_whitespace()
let words_b: HashSet<&str> = b
.split_whitespace()
.map(|w| w.trim_matches(|c: char| !c.is_alphanumeric()))
.filter(|w| w.len() > 3)
.collect();
@ -598,8 +626,16 @@ impl DreamEngine {
tag_b: &str,
conn_type: CreativeConnectionType,
) -> String {
let a_summary = if a.content.len() > 60 { &a.content[..60] } else { &a.content };
let b_summary = if b.content.len() > 60 { &b.content[..60] } else { &b.content };
let a_summary = if a.content.len() > 60 {
&a.content[..60]
} else {
&a.content
};
let b_summary = if b.content.len() > 60 {
&b.content[..60]
} else {
&b.content
};
match conn_type {
CreativeConnectionType::CrossDomain => {
@ -638,7 +674,9 @@ impl DreamEngine {
let mut bigram_index: HashMap<(String, String), Vec<usize>> = HashMap::new();
for (idx, tm) in triaged.iter().enumerate() {
let words: Vec<String> = tm.content.split_whitespace()
let words: Vec<String> = tm
.content
.split_whitespace()
.map(|w| w.to_lowercase())
.filter(|w| w.len() > 3)
.collect();
@ -656,18 +694,21 @@ impl DreamEngine {
pattern_count += 1;
// Create a connection between the first and last memory sharing this pattern
if let (Some(&first), Some(&last)) = (indices.first(), indices.last())
&& first != last {
connections.push(CreativeConnection {
memory_a_id: triaged[first].id.clone(),
memory_b_id: triaged[last].id.clone(),
insight: format!(
"Shared pattern '{} {}' found across {} memories",
bigram.0, bigram.1, indices.len()
),
confidence: (indices.len() as f64 / triaged.len() as f64).min(1.0),
connection_type: CreativeConnectionType::CrossDomain,
});
}
&& first != last
{
connections.push(CreativeConnection {
memory_a_id: triaged[first].id.clone(),
memory_b_id: triaged[last].id.clone(),
insight: format!(
"Shared pattern '{} {}' found across {} memories",
bigram.0,
bigram.1,
indices.len()
),
confidence: (indices.len() as f64 / triaged.len() as f64).min(1.0),
connection_type: CreativeConnectionType::CrossDomain,
});
}
}
}
@ -692,7 +733,8 @@ impl DreamEngine {
let mut actions = Vec::new();
// Validate connections: keep only those above threshold
let valid_connections: Vec<&CreativeConnection> = connections.iter()
let valid_connections: Vec<&CreativeConnection> = connections
.iter()
.filter(|c| c.confidence >= self.validation_threshold)
.collect();
@ -739,7 +781,9 @@ impl DreamEngine {
insights.sort_by(|a, b| {
let score_a = a.confidence * a.novelty;
let score_b = b.confidence * b.novelty;
score_b.partial_cmp(&score_a).unwrap_or(std::cmp::Ordering::Equal)
score_b
.partial_cmp(&score_a)
.unwrap_or(std::cmp::Ordering::Equal)
});
// Cap at 20 insights
@ -753,7 +797,10 @@ impl DreamEngine {
} else {
triaged.iter().map(|m| m.retention_strength).sum::<f64>() / triaged.len() as f64
};
actions.push(format!("Average retention across dreamed memories: {:.2}", avg_retention));
actions.push(format!(
"Average retention across dreamed memories: {:.2}",
avg_retention
));
let phase = PhaseResult {
phase: DreamPhase::Integration,
@ -840,6 +887,8 @@ mod tests {
temporal_level: None,
has_embedding: None,
embedding_model: None,
suppression_count: 0,
suppressed_at: None,
}
}
@ -863,13 +912,15 @@ mod tests {
let importance = ImportanceSignals::new();
let mut synaptic = SynapticTaggingSystem::new();
let memories: Vec<KnowledgeNode> = (0..10).map(|i| {
make_test_node(
&format!("mem-{}", i),
&format!("Test memory content for dream cycle number {}", i),
&["test"],
)
}).collect();
let memories: Vec<KnowledgeNode> = (0..10)
.map(|i| {
make_test_node(
&format!("mem-{}", i),
&format!("Test memory content for dream cycle number {}", i),
&["test"],
)
})
.collect();
let result = engine.run(&memories, &mut emotional, &importance, &mut synaptic);
@ -890,7 +941,11 @@ mod tests {
let memories = vec![
make_emotional_node("emo-1", "Critical production crash error panic!", 0.9),
make_test_node("future-1", "TODO: remind me to add caching next time", &["planning"]),
make_test_node(
"future-1",
"TODO: remind me to add caching next time",
&["planning"],
),
make_test_node("standard-1", "The function returns a string", &["docs"]),
];
@ -915,13 +970,15 @@ mod tests {
let mut emotional = EmotionalMemory::new();
let importance = ImportanceSignals::new();
let memories: Vec<KnowledgeNode> = (0..20).map(|i| {
make_test_node(
&format!("mem-{}", i),
&format!("Memory with varying importance content {}", i),
&["test"],
)
}).collect();
let memories: Vec<KnowledgeNode> = (0..20)
.map(|i| {
make_test_node(
&format!("mem-{}", i),
&format!("Memory with varying importance content {}", i),
&["test"],
)
})
.collect();
let (_triaged, queue, _phase) = engine.phase_nrem1(&memories, &mut emotional, &importance);
@ -934,8 +991,8 @@ mod tests {
let engine = DreamEngine::new();
let mut synaptic = SynapticTaggingSystem::new();
let triaged: Vec<TriagedMemory> = (0..10).map(|i| {
TriagedMemory {
let triaged: Vec<TriagedMemory> = (0..10)
.map(|i| TriagedMemory {
id: format!("mem-{}", i),
content: format!("Test memory {}", i),
importance: 0.5,
@ -945,8 +1002,8 @@ mod tests {
retention_strength: 0.7,
emotional_valence: 0.0,
is_flashbulb: false,
}
}).collect();
})
.collect();
let replay_queue: Vec<String> = triaged.iter().map(|m| m.id.clone()).collect();
@ -1031,7 +1088,10 @@ mod tests {
assert_eq!(phase.phase, DreamPhase::Rem);
// Should find connection via shared "error handling" and "pattern" words
assert!(!connections.is_empty(), "Should find cross-domain error handling pattern");
assert!(
!connections.is_empty(),
"Should find cross-domain error handling pattern"
);
}
#[test]
@ -1039,23 +1099,25 @@ mod tests {
let engine = DreamEngine::new();
let mut emotional = EmotionalMemory::new();
let triaged = vec![
TriagedMemory {
id: "angry-1".to_string(),
content: "Critical production error crashed the entire system".to_string(),
importance: 0.8,
category: TriageCategory::Emotional,
tags: vec!["incident".to_string()],
created_at: Utc::now(),
retention_strength: 0.9,
emotional_valence: -0.8,
is_flashbulb: false,
},
];
let triaged = vec![TriagedMemory {
id: "angry-1".to_string(),
content: "Critical production error crashed the entire system".to_string(),
importance: 0.8,
category: TriageCategory::Emotional,
tags: vec!["incident".to_string()],
created_at: Utc::now(),
retention_strength: 0.9,
emotional_valence: -0.8,
is_flashbulb: false,
}];
let (_connections, emotional_processed, _phase) = engine.phase_rem(&triaged, &mut emotional);
let (_connections, emotional_processed, _phase) =
engine.phase_rem(&triaged, &mut emotional);
assert_eq!(emotional_processed, 1, "Negative emotional memory should be processed");
assert_eq!(
emotional_processed, 1,
"Negative emotional memory should be processed"
);
}
#[test]
@ -1120,7 +1182,11 @@ mod tests {
"error handling with Result type pattern",
"error handling with try-catch pattern",
);
assert!(sim > 0.2, "Similar content should have >0.2 Jaccard: {}", sim);
assert!(
sim > 0.2,
"Similar content should have >0.2 Jaccard: {}",
sim
);
let dissim = engine.content_similarity(
"Rust memory management with ownership",
@ -1151,16 +1217,19 @@ mod tests {
let importance = ImportanceSignals::new();
let mut synaptic = SynapticTaggingSystem::new();
let memories: Vec<KnowledgeNode> = (0..5).map(|i| {
make_test_node(&format!("m{}", i), &format!("Content {}", i), &["test"])
}).collect();
let memories: Vec<KnowledgeNode> = (0..5)
.map(|i| make_test_node(&format!("m{}", i), &format!("Content {}", i), &["test"]))
.collect();
let result = engine.run(&memories, &mut emotional, &importance, &mut synaptic);
for phase in &result.phases {
// Duration should be non-negative (might be 0ms for fast operations)
assert!(phase.duration_ms < 10000);
assert!(!phase.actions.is_empty(), "Each phase should report actions");
assert!(
!phase.actions.is_empty(),
"Each phase should report actions"
);
}
}
@ -1170,7 +1239,11 @@ mod tests {
let mut emotional = EmotionalMemory::new();
let importance = ImportanceSignals::new();
let mut node = make_test_node("flash-1", "CRITICAL: Production server crash! Emergency rollback needed immediately!", &["incident"]);
let mut node = make_test_node(
"flash-1",
"CRITICAL: Production server crash! Emergency rollback needed immediately!",
&["incident"],
);
node.sentiment_magnitude = 0.9;
let (triaged, _queue, phase) = engine.phase_nrem1(&[node], &mut emotional, &importance);

View file
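The `content_similarity` reformatting above shows how the word sets are built (edges trimmed of non-alphanumerics, words longer than 3 chars kept). A minimal standalone sketch of that helper; the word-set construction mirrors the diff, while the final Jaccard ratio is an assumption since the hunk does not show how the sets are combined:

```rust
use std::collections::HashSet;

// Word-set construction as in the diff: trim non-alphanumeric characters
// from word edges, keep only words longer than 3 chars.
fn word_set(s: &str) -> HashSet<&str> {
    s.split_whitespace()
        .map(|w| w.trim_matches(|c: char| !c.is_alphanumeric()))
        .filter(|w| w.len() > 3)
        .collect()
}

// Assumed combination step: Jaccard similarity |A ∩ B| / |A ∪ B|.
fn content_similarity(a: &str, b: &str) -> f64 {
    let (wa, wb) = (word_set(a), word_set(b));
    if wa.is_empty() || wb.is_empty() {
        return 0.0;
    }
    let intersection = wa.intersection(&wb).count() as f64;
    let union = wa.union(&wb).count() as f64;
    intersection / union
}

fn main() {
    let sim = content_similarity(
        "error handling with Result type pattern",
        "error handling with try-catch pattern",
    );
    // Shares "error", "handling", "with", "pattern" out of 7 distinct words.
    assert!(sim > 0.2);
    println!("similarity = {sim:.2}");
}
```

Note there is no lowercasing here, matching the diff: "Result" and "result" would count as different words.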

@ -15,8 +15,8 @@ mod local;
pub(crate) use local::get_cache_dir;
pub use local::{
cosine_similarity, dot_product, euclidean_distance, matryoshka_truncate, Embedding,
EmbeddingError, EmbeddingService, BATCH_SIZE, EMBEDDING_DIMENSIONS, MAX_TEXT_LENGTH,
BATCH_SIZE, EMBEDDING_DIMENSIONS, Embedding, EmbeddingError, EmbeddingService, MAX_TEXT_LENGTH,
cosine_similarity, dot_product, euclidean_distance, matryoshka_truncate,
};
pub use code::CodeEmbedding;

View file

@ -348,7 +348,8 @@ mod tests {
#[test]
fn test_fsrs6_constants() {
assert_eq!(FSRS6_WEIGHTS.len(), 21);
assert!(FSRS6_WEIGHTS[20] > 0.0 && FSRS6_WEIGHTS[20] < 1.0);
let w20 = FSRS6_WEIGHTS[20];
assert!(w20 > 0.0 && w20 < 1.0);
}
#[test]

View file

@ -19,6 +19,14 @@ mod optimizer;
mod scheduler;
pub use algorithm::{
DEFAULT_DECAY,
DEFAULT_RETENTION,
// Constants
FSRS6_WEIGHTS,
MAX_DIFFICULTY,
MAX_STABILITY,
MIN_DIFFICULTY,
MIN_STABILITY,
apply_sentiment_boost,
fuzz_interval,
initial_difficulty,
@ -38,14 +46,6 @@ pub use algorithm::{
retrievability_with_decay,
same_day_stability,
same_day_stability_with_weights,
DEFAULT_DECAY,
DEFAULT_RETENTION,
// Constants
FSRS6_WEIGHTS,
MAX_DIFFICULTY,
MAX_STABILITY,
MIN_DIFFICULTY,
MIN_STABILITY,
};
pub use scheduler::{

View file

@ -3,7 +3,7 @@
//! Personalizes FSRS parameters based on user review history.
//! Uses gradient-free optimization to minimize prediction error.
use super::algorithm::{retrievability_with_decay, FSRS6_WEIGHTS};
use super::algorithm::{FSRS6_WEIGHTS, retrievability_with_decay};
use chrono::{DateTime, Utc};
// ============================================================================

View file

@ -7,11 +7,10 @@ use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use super::algorithm::{
apply_sentiment_boost, fuzz_interval, initial_difficulty_with_weights,
initial_stability_with_weights, next_difficulty_with_weights,
DEFAULT_RETENTION, FSRS6_WEIGHTS, MAX_STABILITY, apply_sentiment_boost, fuzz_interval,
initial_difficulty_with_weights, initial_stability_with_weights, next_difficulty_with_weights,
next_forget_stability_with_weights, next_interval_with_decay,
next_recall_stability_with_weights, retrievability_with_decay, same_day_stability_with_weights,
DEFAULT_RETENTION, FSRS6_WEIGHTS, MAX_STABILITY,
};
// ============================================================================
@ -243,13 +242,11 @@ impl FSRSScheduler {
// Apply sentiment boost
if self.enable_sentiment_boost
&& let Some(sentiment) = sentiment_boost
&& sentiment > 0.0 {
new_state.stability = apply_sentiment_boost(
new_state.stability,
sentiment,
self.max_sentiment_boost,
);
}
&& sentiment > 0.0
{
new_state.stability =
apply_sentiment_boost(new_state.stability, sentiment, self.max_sentiment_boost);
}
let mut interval =
next_interval_with_decay(new_state.stability, self.params.desired_retention, w20)
@ -436,9 +433,11 @@ mod tests {
#[test]
fn test_custom_parameters() {
let mut params = FSRSParameters::default();
params.desired_retention = 0.85;
params.enable_fuzz = false;
let params = FSRSParameters {
desired_retention: 0.85,
enable_fuzz: false,
..FSRSParameters::default()
};
let scheduler = FSRSScheduler::new(params);
let card = scheduler.new_card();

View file

@ -84,10 +84,7 @@ mod tests {
#[test]
fn test_sanitize_fts5_query_special_chars() {
assert_eq!(sanitize_fts5_query("hello* world"), "\"hello world\"");
assert_eq!(
sanitize_fts5_query("content:secret"),
"\"content secret\""
);
assert_eq!(sanitize_fts5_query("content:secret"), "\"content secret\"");
assert_eq!(sanitize_fts5_query("^boost"), "\"boost\"");
}

View file
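The three assertions above pin down the observable behavior of `sanitize_fts5_query`. A guess at an implementation consistent with them; the real function may handle more FTS5 operators than the ones shown:

```rust
// Reverse-engineered from the tests: FTS5 operator characters become
// spaces, whitespace is collapsed, and the result is quoted so SQLite
// treats it as a literal phrase rather than query syntax.
fn sanitize_fts5_query(query: &str) -> String {
    let cleaned: String = query
        .chars()
        .map(|c| if "*:^\"()".contains(c) { ' ' } else { c })
        .collect();
    let words: Vec<&str> = cleaned.split_whitespace().collect();
    format!("\"{}\"", words.join(" "))
}

fn main() {
    assert_eq!(sanitize_fts5_query("hello* world"), "\"hello world\"");
    assert_eq!(sanitize_fts5_query("content:secret"), "\"content secret\"");
    assert_eq!(sanitize_fts5_query("^boost"), "\"boost\"");
}
```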

@ -114,20 +114,27 @@ pub mod neuroscience;
// Memory types
pub use memory::{
ConsolidationResult, EmbeddingResult, IngestInput, KnowledgeNode, MatchType, MemoryStats,
NodeType, RecallInput, SearchMode, SearchResult, SimilarityResult, TemporalRange,
ConsolidationResult,
// GOD TIER 2026: New types
EdgeType, KnowledgeEdge, MemoryScope, MemorySystem,
EdgeType,
EmbeddingResult,
IngestInput,
KnowledgeEdge,
KnowledgeNode,
MatchType,
MemoryScope,
MemoryStats,
MemorySystem,
NodeType,
RecallInput,
SearchMode,
SearchResult,
SimilarityResult,
TemporalRange,
};
// FSRS-6 algorithm
pub use fsrs::{
initial_difficulty,
initial_stability,
next_interval,
// Core functions for advanced usage
retrievability,
retrievability_with_decay,
FSRSParameters,
FSRSScheduler,
FSRSState,
@ -135,6 +142,12 @@ pub use fsrs::{
PreviewResults,
Rating,
ReviewResult,
initial_difficulty,
initial_stability,
next_interval,
// Core functions for advanced usage
retrievability,
retrievability_with_decay,
};
// Storage layer
@ -146,9 +159,8 @@ pub use storage::{
// Consolidation (sleep-inspired memory processing)
pub use consolidation::SleepConsolidation;
pub use consolidation::{
DreamEngine, DreamPhase, FourPhaseDreamResult, PhaseResult,
TriagedMemory, TriageCategory, CreativeConnection, CreativeConnectionType,
DreamInsight,
CreativeConnection, CreativeConnectionType, DreamEngine, DreamInsight, DreamPhase,
FourPhaseDreamResult, PhaseResult, TriageCategory, TriagedMemory,
};
// Advanced features (bleeding edge 2026)
@ -162,6 +174,8 @@ pub use advanced::{
AdaptiveEmbedder,
ApplicableKnowledge,
AppliedModification,
// Prediction Error Gating (solves bad vs good similar memory problem)
CandidateMemory,
ChainStep,
ChangeSummary,
CompressedMemory,
@ -175,16 +189,20 @@ pub use advanced::{
// Sleep consolidation (automatic background consolidation)
ConsolidationScheduler,
ContentType,
CreateReason,
// Cross-project learning
CrossProjectLearner,
DetectedIntent,
DiscoveredConnection,
DiscoveredConnectionType,
DreamConfig,
// DreamMemory - input type for dreaming
DreamMemory,
DiscoveredConnection,
DiscoveredConnectionType,
DreamResult,
EmbeddingStrategy,
EvaluationIntent,
GateDecision,
GateStats,
ImportanceDecayConfig,
ImportanceScore,
// Importance tracking
@ -204,11 +222,14 @@ pub use advanced::{
MemoryPath,
MemoryReplay,
MemorySnapshot,
MergeStrategy,
Modification,
Pattern,
PatternType,
PredictedMemory,
PredictionContext,
PredictionErrorConfig,
PredictionErrorGate,
ProjectContext,
ReasoningChain,
ReconsolidatedMemory,
@ -217,25 +238,16 @@ pub use advanced::{
ReconsolidationStats,
RelationshipType,
RetrievalRecord,
SimilarityResult as PredictionSimilarityResult,
// Speculative retrieval
SpeculativeRetriever,
SupersedeReason,
SynthesizedInsight,
UniversalPattern,
UpdateType,
UsageEvent,
UsagePattern,
UserAction,
// Prediction Error Gating (solves bad vs good similar memory problem)
CandidateMemory,
CreateReason,
EvaluationIntent,
GateDecision,
GateStats,
MergeStrategy,
PredictionErrorConfig,
PredictionErrorGate,
SimilarityResult as PredictionSimilarityResult,
SupersedeReason,
UpdateType,
};
// Codebase memory (Vestige's killer differentiator)
@ -315,14 +327,20 @@ pub use neuroscience::{
ContextReinstatement,
ContextWeights,
DecayFunction,
// Emotional Memory (Brown & Kulik 1977, Bower 1981, LaBar & Cabeza 2006)
EmotionCategory,
EmotionalContext,
EmotionalEvaluation,
EmotionalMarker,
EmotionalMemory,
EmotionalMemoryStats,
EncodingContext,
FullMemory,
// Hippocampal Indexing (Teyler & Rudy, 2007)
HippocampalIndex,
HippocampalIndexConfig,
HippocampalIndexError,
INDEX_EMBEDDING_DIM,
ImportanceCluster,
ImportanceConsolidationConfig,
ImportanceEncodingConfig,
@ -374,40 +392,34 @@ pub use neuroscience::{
TemporalMarker,
TimeOfDay,
TopicalContext,
INDEX_EMBEDDING_DIM,
// Emotional Memory (Brown & Kulik 1977, Bower 1981, LaBar & Cabeza 2006)
EmotionCategory,
EmotionalEvaluation,
EmotionalMemory,
EmotionalMemoryStats,
};
// Embeddings (when feature enabled)
#[cfg(feature = "embeddings")]
pub use embeddings::{
cosine_similarity, euclidean_distance, Embedding, EmbeddingError, EmbeddingService,
EMBEDDING_DIMENSIONS,
EMBEDDING_DIMENSIONS, Embedding, EmbeddingError, EmbeddingService, cosine_similarity,
euclidean_distance,
};
// Search (when feature enabled)
#[cfg(feature = "vector-search")]
pub use search::{
linear_combination,
reciprocal_rank_fusion,
HybridSearchConfig,
// Hybrid search
HybridSearcher,
// Keyword search
KeywordSearcher,
VectorIndex,
VectorIndexConfig,
VectorIndexStats,
VectorSearchError,
RerankedResult,
// GOD TIER 2026: Reranking
Reranker,
RerankerConfig,
RerankerError,
RerankedResult,
VectorIndex,
VectorIndexConfig,
VectorIndexStats,
VectorSearchError,
linear_combination,
reciprocal_rank_fusion,
};
// ============================================================================
@ -450,6 +462,8 @@ pub mod prelude {
// Sleep consolidation
ConsolidationScheduler,
CrossProjectLearner,
EvaluationIntent,
GateDecision,
ImportanceTracker,
IntentDetector,
LabileState,
@ -459,14 +473,12 @@ pub mod prelude {
MemoryReplay,
Modification,
PredictedMemory,
// Prediction Error Gating
PredictionErrorGate,
ReconsolidatedMemory,
// Reconsolidation
ReconsolidationManager,
SpeculativeRetriever,
// Prediction Error Gating
PredictionErrorGate,
GateDecision,
EvaluationIntent,
};
// Codebase memory

View file

@ -299,7 +299,6 @@ pub struct ConsolidationResult {
pub w20_optimized: Option<f64>,
}
// ============================================================================
// SEARCH RESULTS
// ============================================================================
@ -360,4 +359,3 @@ pub struct EmbeddingResult {
/// Error messages for failures
pub errors: Vec<String>,
}

View file

@ -179,6 +179,15 @@ pub struct KnowledgeNode {
/// Which model generated the embedding
#[serde(skip_serializing_if = "Option::is_none")]
pub embedding_model: Option<String>,
// ========== Active Forgetting (v2.0.5, Anderson 2025 + Davis Rac1) ==========
/// Top-down suppression count — compounds with each `suppress` call
/// (Suppression-Induced Forgetting, Anderson 2025).
#[serde(default)]
pub suppression_count: i32,
/// Timestamp of the most recent suppression (for 24h labile window).
#[serde(skip_serializing_if = "Option::is_none")]
pub suppressed_at: Option<DateTime<Utc>>,
}
impl Default for KnowledgeNode {
@ -213,6 +222,8 @@ impl Default for KnowledgeNode {
temporal_level: None,
has_embedding: None,
embedding_model: None,
suppression_count: 0,
suppressed_at: None,
}
}
}

View file

@ -0,0 +1,226 @@
//! Active Forgetting — Top-Down Inhibitory Control of Memory (v2.0.5)
//!
//! Implements user-initiated memory suppression, distinct from passive FSRS
//! decay and from bottom-up retrieval-induced forgetting (Anderson 1994,
//! `memory_states.rs`). This module models the right-lateral-prefrontal-cortex
//! gated inhibitory pathway, where top-down cognitive control compounds with
//! each stopping attempt (Suppression-Induced Forgetting) and spreads via a
//! Rac1-GTPase-like cascade to co-activated synaptic neighbors.
//!
//! ## References
//!
//! - Anderson, M. C., Hanslmayr, S., & Quaegebeur, L. (2025). Brain mechanisms
//! underlying the inhibitory control of thought. *Nature Reviews Neuroscience*.
//! DOI: 10.1038/s41583-025-00929-y. Establishes rDLPFC as the domain-general
//! inhibitory controller; SIF scales with stopping attempts and is incentive-resistant.
//! - Cervantes-Sandoval, I., Chakraborty, M., MacMullen, C., & Davis, R. L.
//! (2020). Rac1 Impairs Forgetting-Induced Cellular Plasticity in Mushroom
//! Body Output Neurons. *Front Cell Neurosci*. PMC7477079. Establishes Rac1
//! GTPase as the active synaptic destabilization mechanism.
//!
//! ## Contrast with existing modules
//!
//! - `memory_states.rs` (Anderson 1994, RIF): BOTTOM-UP, passive consequence
//! of retrieval competition. When memory A wins a query, its competitors
//! automatically lose retrievability.
//! - `active_forgetting.rs` (Anderson 2025, SIF + Davis Rac1): TOP-DOWN,
//! user-initiated via the `suppress` MCP tool. Compounds with each call.
//! Spreads to neighbors. Reversible within a 24h labile window.
use chrono::{DateTime, Duration, Utc};
use serde::{Deserialize, Serialize};
/// Default SIF penalty coefficient per suppression increment.
pub const DEFAULT_SIF_K: f64 = 0.15;
/// Maximum cumulative penalty from compounding suppression.
/// Matches Anderson's empirical SIF saturation.
pub const DEFAULT_MAX_PENALTY: f64 = 0.8;
/// Cascade attenuation factor for Rac1 spreading to co-activated neighbors.
pub const DEFAULT_CASCADE_DECAY: f64 = 0.3;
/// Labile window in hours during which a suppression may be reversed.
/// Parallels Nader's reconsolidation labile window, stretched to a 24-hour axis.
pub const DEFAULT_LABILE_HOURS: i64 = 24;
/// Maximum per-neighbor retrieval-strength decrement during cascade.
pub const DEFAULT_CASCADE_RETRIEVAL_DECREMENT_CAP: f64 = 0.15;
/// Top-down inhibitory control over memory retrieval.
///
/// Stateless — all persistent state lives on the `knowledge_nodes` table
/// (columns `suppression_count`, `suppressed_at`). This struct exposes pure
/// helper functions consumed by `Storage::suppress_memory`,
/// `Storage::reverse_suppression`, `Storage::apply_rac1_cascade`, and the
/// `search_unified` score adjustment stage.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ActiveForgettingSystem {
/// Penalty coefficient per suppression increment (SIF).
pub k: f64,
/// Maximum cumulative penalty cap.
pub max_penalty: f64,
/// Cascade attenuation factor for Rac1 spreading.
pub cascade_decay: f64,
/// Reversal window in hours.
pub labile_hours: i64,
}
impl Default for ActiveForgettingSystem {
fn default() -> Self {
Self {
k: DEFAULT_SIF_K,
max_penalty: DEFAULT_MAX_PENALTY,
cascade_decay: DEFAULT_CASCADE_DECAY,
labile_hours: DEFAULT_LABILE_HOURS,
}
}
}
impl ActiveForgettingSystem {
/// Create a new system with default parameters.
pub fn new() -> Self {
Self::default()
}
/// Compute the retrieval-score penalty for a memory with the given
/// suppression count. Penalty grows linearly then saturates at
/// `max_penalty` (Anderson's empirical SIF ceiling).
///
/// Applied in `search_unified` as `score *= (1.0 - penalty)`.
pub fn retrieval_penalty(&self, suppression_count: i32) -> f64 {
if suppression_count <= 0 {
return 0.0;
}
(self.k * suppression_count as f64).min(self.max_penalty)
}
/// Return `true` if a suppression is within the labile window and
/// therefore reversible. Matches reconsolidation semantics on a 24h axis.
pub fn is_reversible(&self, suppressed_at: DateTime<Utc>) -> bool {
Utc::now() - suppressed_at < Duration::hours(self.labile_hours)
}
/// Stability multiplier to apply to a neighbor of a suppressed memory
/// during the Rac1 cascade. Stronger co-activation edges propagate more
/// decay. A full-strength (1.0) edge yields `(1 - cascade_decay)` = 0.7 by
/// default (30% stability loss per cascade hop); the factor is clamped so it
/// never drops below 0.1.
pub fn cascade_stability_factor(&self, edge_strength: f64) -> f64 {
(1.0 - self.cascade_decay * edge_strength.clamp(0.0, 1.0)).max(0.1)
}
/// Retrieval-strength decrement for a cascade neighbor, proportional to
/// co-activation edge strength and capped at
/// `DEFAULT_CASCADE_RETRIEVAL_DECREMENT_CAP`.
pub fn cascade_retrieval_decrement(&self, edge_strength: f64) -> f64 {
(0.05 * edge_strength.clamp(0.0, 1.0)).min(DEFAULT_CASCADE_RETRIEVAL_DECREMENT_CAP)
}
/// Time remaining in the labile window, or `None` if expired.
pub fn remaining_labile_time(&self, suppressed_at: DateTime<Utc>) -> Option<Duration> {
let window = Duration::hours(self.labile_hours);
let elapsed = Utc::now() - suppressed_at;
if elapsed >= window {
None
} else {
Some(window - elapsed)
}
}
/// Deadline timestamp after which reversal will fail.
pub fn reversible_until(&self, suppressed_at: DateTime<Utc>) -> DateTime<Utc> {
suppressed_at + Duration::hours(self.labile_hours)
}
}
/// Aggregate statistics about active-forgetting state across all memories.
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct SuppressionStats {
/// Total memories with suppression_count > 0.
pub total_suppressed: usize,
/// Memories suppressed within the last `labile_hours` (still reversible).
pub recently_reversible: usize,
/// Mean suppression_count across all suppressed memories.
pub avg_suppression_count: f64,
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_sif_penalty_compounds() {
let sys = ActiveForgettingSystem::new();
assert_eq!(sys.retrieval_penalty(0), 0.0);
assert!((sys.retrieval_penalty(1) - 0.15).abs() < 1e-9);
assert!((sys.retrieval_penalty(2) - 0.30).abs() < 1e-9);
assert!((sys.retrieval_penalty(5) - 0.75).abs() < 1e-9);
// Saturates at max_penalty
assert!((sys.retrieval_penalty(6) - 0.80).abs() < 1e-9);
assert!((sys.retrieval_penalty(100) - 0.80).abs() < 1e-9);
}
#[test]
fn test_labile_window_reversible() {
let sys = ActiveForgettingSystem::new();
let recent = Utc::now() - Duration::hours(23);
assert!(sys.is_reversible(recent));
let expired = Utc::now() - Duration::hours(25);
assert!(!sys.is_reversible(expired));
assert!(sys.is_reversible(Utc::now()));
}
#[test]
fn test_cascade_attenuation() {
let sys = ActiveForgettingSystem::new();
let strong = sys.cascade_stability_factor(0.9);
let weak = sys.cascade_stability_factor(0.1);
assert!(strong < weak, "strong edges should propagate more decay");
// Zero edge → no decay
assert!((sys.cascade_stability_factor(0.0) - 1.0).abs() < 1e-9);
// Factor never zeroes out
assert!(sys.cascade_stability_factor(1.0) >= 0.1);
}
#[test]
fn test_default_params_reasonable() {
let sys = ActiveForgettingSystem::new();
assert!(sys.k > 0.0 && sys.k <= 0.25, "k should be in (0, 0.25]");
assert!(
sys.max_penalty >= 0.5 && sys.max_penalty <= 0.95,
"max_penalty should be in [0.5, 0.95]"
);
assert!(sys.labile_hours >= 12 && sys.labile_hours <= 72);
assert!(sys.cascade_decay > 0.0 && sys.cascade_decay < 1.0);
}
#[test]
fn test_reversible_until_deadline() {
let sys = ActiveForgettingSystem::new();
let now = Utc::now();
let deadline = sys.reversible_until(now);
let expected = now + Duration::hours(24);
assert!((deadline - expected).num_milliseconds().abs() < 100);
}
#[test]
fn test_remaining_labile_time_expired_returns_none() {
let sys = ActiveForgettingSystem::new();
let past = Utc::now() - Duration::hours(30);
assert!(sys.remaining_labile_time(past).is_none());
let recent = Utc::now() - Duration::hours(10);
let remaining = sys.remaining_labile_time(recent);
assert!(remaining.is_some());
// Should have ~14 hours left (24h window - 10h elapsed)
let hours_left = remaining.unwrap().num_hours();
assert!((13..=14).contains(&hours_left));
}
#[test]
fn test_cascade_retrieval_decrement_capped() {
let sys = ActiveForgettingSystem::new();
assert!((sys.cascade_retrieval_decrement(0.0) - 0.0).abs() < 1e-9);
assert!(sys.cascade_retrieval_decrement(0.5) <= DEFAULT_CASCADE_RETRIEVAL_DECREMENT_CAP);
assert!(sys.cascade_retrieval_decrement(1.0) <= DEFAULT_CASCADE_RETRIEVAL_DECREMENT_CAP);
}
}

View file
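To see how the pieces of `ActiveForgettingSystem` compose, here is the SIF and cascade arithmetic reproduced inline with the default constants, so the snippet runs without the crate (and without the `chrono`-dependent labile-window helpers):

```rust
// Inline reproduction of the pure math from ActiveForgettingSystem,
// using DEFAULT_SIF_K = 0.15, DEFAULT_MAX_PENALTY = 0.8,
// DEFAULT_CASCADE_DECAY = 0.3.
struct Sif {
    k: f64,
    max_penalty: f64,
    cascade_decay: f64,
}

impl Sif {
    // Linear growth per suppress call, saturating at max_penalty.
    fn retrieval_penalty(&self, suppression_count: i32) -> f64 {
        if suppression_count <= 0 {
            return 0.0;
        }
        (self.k * suppression_count as f64).min(self.max_penalty)
    }

    // Stronger co-activation edges propagate more decay; never below 0.1.
    fn cascade_stability_factor(&self, edge_strength: f64) -> f64 {
        (1.0 - self.cascade_decay * edge_strength.clamp(0.0, 1.0)).max(0.1)
    }
}

fn main() {
    let sif = Sif { k: 0.15, max_penalty: 0.8, cascade_decay: 0.3 };
    // search_unified applies the penalty as score *= (1 - penalty):
    // one suppress call takes a 0.9 retrieval score down to 0.765.
    let score = 0.9 * (1.0 - sif.retrieval_penalty(1));
    assert!((score - 0.765).abs() < 1e-9);
    // Past the ceiling (6 calls at k = 0.15), further calls change nothing.
    assert_eq!(sif.retrieval_penalty(6), sif.retrieval_penalty(100));
    // A full-strength co-activation edge loses 30% stability per hop.
    assert!((sif.cascade_stability_factor(1.0) - 0.7).abs() < 1e-9);
}
```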

@ -911,33 +911,38 @@ impl ContextMatcher {
// Same session is a very strong match
if let (Some(e_id), Some(r_id)) = (&encoding.session_id, &retrieval.session_id)
&& e_id == r_id {
return 1.0;
}
&& e_id == r_id
{
return 1.0;
}
// Project match (0.4 weight)
if let (Some(e_proj), Some(r_proj)) = (&encoding.project, &retrieval.project)
&& e_proj == r_proj {
score += 0.4;
}
&& e_proj == r_proj
{
score += 0.4;
}
// Activity type match (0.3 weight)
if let (Some(e_act), Some(r_act)) = (&encoding.activity_type, &retrieval.activity_type)
&& e_act == r_act {
score += 0.3;
}
&& e_act == r_act
{
score += 0.3;
}
// Git branch match (0.2 weight)
if let (Some(e_br), Some(r_br)) = (&encoding.git_branch, &retrieval.git_branch)
&& e_br == r_br {
score += 0.2;
}
&& e_br == r_br
{
score += 0.2;
}
// Active file match (0.1 weight)
if let (Some(e_file), Some(r_file)) = (&encoding.active_file, &retrieval.active_file)
&& e_file == r_file {
score += 0.1;
}
&& e_file == r_file
{
score += 0.1;
}
score
}
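The weights above form a simple additive similarity: a shared session short-circuits to 1.0, otherwise project (0.4), activity (0.3), branch (0.2), and file (0.1) sum to at most 1.0. A standalone sketch of that scheme, with the context struct simplified to the four optional fields:

```rust
// Simplified stand-in for the encoding/retrieval context types above.
#[derive(Default)]
struct Ctx<'a> {
    project: Option<&'a str>,
    activity: Option<&'a str>,
    branch: Option<&'a str>,
    file: Option<&'a str>,
}

/// Additive context match over four weighted dimensions (0.4 + 0.3 + 0.2 + 0.1).
/// A dimension contributes only when both sides have a value and they agree.
fn context_score(a: &Ctx, b: &Ctx) -> f64 {
    let hit = |x: Option<&str>, y: Option<&str>| matches!((x, y), (Some(p), Some(q)) if p == q);
    let mut score = 0.0;
    if hit(a.project, b.project) { score += 0.4; }
    if hit(a.activity, b.activity) { score += 0.3; }
    if hit(a.branch, b.branch) { score += 0.2; }
    if hit(a.file, b.file) { score += 0.1; }
    score
}

fn main() {
    let enc = Ctx { project: Some("vestige"), branch: Some("main"), ..Default::default() };
    let ret = Ctx { project: Some("vestige"), branch: Some("dev"), ..Default::default() };
    // Project matches (0.4); branch differs; activity/file are absent on both sides.
    assert!((context_score(&enc, &ret) - 0.4).abs() < 1e-9);
}
```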
@ -985,7 +990,11 @@ impl ContextMatcher {
.collect();
// Sort by combined score (descending)
scored.sort_by(|a, b| b.combined_score.partial_cmp(&a.combined_score).unwrap_or(std::cmp::Ordering::Equal));
scored.sort_by(|a, b| {
b.combined_score
.partial_cmp(&a.combined_score)
.unwrap_or(std::cmp::Ordering::Equal)
});
scored
}
@ -1103,9 +1112,11 @@ mod tests {
topical.add_topic("security");
topical.extract_keywords_from("implementing OAuth2 authentication flow");
assert!(topical
.active_topics
.contains(&"authentication".to_string()));
assert!(
topical
.active_topics
.contains(&"authentication".to_string())
);
assert!(topical.keywords.contains(&"oauth2".to_string()));
let terms = topical.all_terms();
@ -1118,10 +1129,11 @@ mod tests {
ctx.add_topic("api-design");
ctx.set_project("vestige");
assert!(ctx
.topical
.active_topics
.contains(&"api-design".to_string()));
assert!(
ctx.topical
.active_topics
.contains(&"api-design".to_string())
);
assert_eq!(ctx.session.project, Some("vestige".to_string()));
}
@ -1137,7 +1149,11 @@ mod tests {
let ctx2 = ctx1.clone();
let similarity = matcher.match_contexts(&ctx1, &ctx2);
assert!(similarity > 0.8, "Same context should have high similarity, got {}", similarity);
assert!(
similarity > 0.8,
"Same context should have high similarity, got {}",
similarity
);
}
#[test]

View file

@ -216,15 +216,25 @@ impl EmotionalMemory {
// Check negation context (simple window-based)
let negation_words: Vec<&str> = vec![
"not", "no", "never", "don't", "doesn't", "didn't", "won't",
"can't", "couldn't", "shouldn't", "without", "hardly",
"not",
"no",
"never",
"don't",
"doesn't",
"didn't",
"won't",
"can't",
"couldn't",
"shouldn't",
"without",
"hardly",
];
for (i, word) in words.iter().enumerate() {
if let Some(&(valence, arousal)) = self.lexicon.get(word.as_str()) {
// Check for negation in 3-word window before
let negated = (i.saturating_sub(3)..i)
.any(|j| negation_words.contains(&words[j].as_str()));
let negated =
(i.saturating_sub(3)..i).any(|j| negation_words.contains(&words[j].as_str()));
let effective_valence = if negated { -valence * 0.7 } else { valence };
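The negation check looks back at most three tokens and, on a hit, flips and dampens valence (`-valence * 0.7`). The window logic in isolation, with the negation list abbreviated:

```rust
/// Flip-and-dampen valence when a negation word appears within the three
/// preceding tokens — mirroring the window check above. The 0.7 attenuation
/// factor is taken from the code; the word list here is abbreviated.
fn effective_valence(words: &[&str], i: usize, valence: f64) -> f64 {
    const NEGATIONS: [&str; 4] = ["not", "no", "never", "don't"];
    let negated = (i.saturating_sub(3)..i).any(|j| NEGATIONS.contains(&words[j]));
    if negated { -valence * 0.7 } else { valence }
}

fn main() {
    let words = ["this", "is", "not", "amazing"];
    // "amazing" (index 3) has "not" at index 2, inside the 3-word window:
    // 0.9 becomes -0.63.
    assert!((effective_valence(&words, 3, 0.9) - (-0.63)).abs() < 1e-9);
    // No negation in the window: valence passes through unchanged.
    assert!((effective_valence(&words, 1, 0.5) - 0.5).abs() < 1e-9);
}
```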
@ -269,9 +279,14 @@ impl EmotionalMemory {
};
// Flashbulb detection: high novelty proxy (urgency/surprise markers) + high arousal
let novelty_proxy = urgency_boost + if category == EmotionCategory::Surprise { 0.4 } else { 0.0 };
let is_flashbulb = novelty_proxy >= FLASHBULB_NOVELTY_THRESHOLD
&& arousal >= FLASHBULB_AROUSAL_THRESHOLD;
let novelty_proxy = urgency_boost
+ if category == EmotionCategory::Surprise {
0.4
} else {
0.0
};
let is_flashbulb =
novelty_proxy >= FLASHBULB_NOVELTY_THRESHOLD && arousal >= FLASHBULB_AROUSAL_THRESHOLD;
if is_flashbulb {
self.flashbulbs_detected += 1;
@ -447,66 +462,114 @@ impl EmotionalMemory {
// Positive / Low arousal
for (word, v, a) in [
("good", 0.6, 0.3), ("nice", 0.5, 0.2), ("clean", 0.4, 0.2),
("simple", 0.3, 0.1), ("smooth", 0.4, 0.2), ("stable", 0.4, 0.1),
("helpful", 0.5, 0.3), ("elegant", 0.6, 0.3), ("solid", 0.4, 0.2),
("good", 0.6, 0.3),
("nice", 0.5, 0.2),
("clean", 0.4, 0.2),
("simple", 0.3, 0.1),
("smooth", 0.4, 0.2),
("stable", 0.4, 0.1),
("helpful", 0.5, 0.3),
("elegant", 0.6, 0.3),
("solid", 0.4, 0.2),
] {
lex.insert(word.to_string(), (v, a));
}
// Positive / High arousal
for (word, v, a) in [
("amazing", 0.9, 0.8), ("excellent", 0.8, 0.6), ("perfect", 0.9, 0.7),
("awesome", 0.8, 0.7), ("great", 0.7, 0.5), ("fantastic", 0.9, 0.8),
("brilliant", 0.8, 0.7), ("incredible", 0.9, 0.8), ("love", 0.8, 0.7),
("success", 0.7, 0.6), ("solved", 0.7, 0.6), ("fixed", 0.6, 0.5),
("working", 0.5, 0.4), ("breakthrough", 0.9, 0.9), ("discovered", 0.7, 0.7),
("amazing", 0.9, 0.8),
("excellent", 0.8, 0.6),
("perfect", 0.9, 0.7),
("awesome", 0.8, 0.7),
("great", 0.7, 0.5),
("fantastic", 0.9, 0.8),
("brilliant", 0.8, 0.7),
("incredible", 0.9, 0.8),
("love", 0.8, 0.7),
("success", 0.7, 0.6),
("solved", 0.7, 0.6),
("fixed", 0.6, 0.5),
("working", 0.5, 0.4),
("breakthrough", 0.9, 0.9),
("discovered", 0.7, 0.7),
] {
lex.insert(word.to_string(), (v, a));
}
// Negative / Low arousal
for (word, v, a) in [
("bad", -0.5, 0.3), ("wrong", -0.4, 0.3), ("slow", -0.3, 0.2),
("confusing", -0.4, 0.3), ("unclear", -0.3, 0.2), ("messy", -0.4, 0.3),
("annoying", -0.5, 0.4), ("boring", -0.3, 0.1), ("ugly", -0.5, 0.3),
("deprecated", -0.3, 0.2), ("stale", -0.3, 0.1),
("bad", -0.5, 0.3),
("wrong", -0.4, 0.3),
("slow", -0.3, 0.2),
("confusing", -0.4, 0.3),
("unclear", -0.3, 0.2),
("messy", -0.4, 0.3),
("annoying", -0.5, 0.4),
("boring", -0.3, 0.1),
("ugly", -0.5, 0.3),
("deprecated", -0.3, 0.2),
("stale", -0.3, 0.1),
] {
lex.insert(word.to_string(), (v, a));
}
// Negative / High arousal (bugs, errors, failures)
for (word, v, a) in [
("error", -0.6, 0.7), ("bug", -0.6, 0.6), ("crash", -0.8, 0.9),
("fail", -0.7, 0.7), ("failed", -0.7, 0.7), ("failure", -0.7, 0.7),
("broken", -0.7, 0.7), ("panic", -0.9, 0.9), ("fatal", -0.9, 0.9),
("critical", -0.5, 0.9), ("severe", -0.6, 0.8), ("urgent", -0.3, 0.9),
("emergency", -0.5, 0.9), ("vulnerability", -0.7, 0.8),
("exploit", -0.7, 0.8), ("leaked", -0.8, 0.9), ("compromised", -0.8, 0.9),
("timeout", -0.5, 0.6), ("deadlock", -0.7, 0.8), ("overflow", -0.6, 0.7),
("corruption", -0.8, 0.8), ("regression", -0.6, 0.7),
("blocker", -0.6, 0.8), ("outage", -0.8, 0.9), ("incident", -0.5, 0.7),
("error", -0.6, 0.7),
("bug", -0.6, 0.6),
("crash", -0.8, 0.9),
("fail", -0.7, 0.7),
("failed", -0.7, 0.7),
("failure", -0.7, 0.7),
("broken", -0.7, 0.7),
("panic", -0.9, 0.9),
("fatal", -0.9, 0.9),
("critical", -0.5, 0.9),
("severe", -0.6, 0.8),
("urgent", -0.3, 0.9),
("emergency", -0.5, 0.9),
("vulnerability", -0.7, 0.8),
("exploit", -0.7, 0.8),
("leaked", -0.8, 0.9),
("compromised", -0.8, 0.9),
("timeout", -0.5, 0.6),
("deadlock", -0.7, 0.8),
("overflow", -0.6, 0.7),
("corruption", -0.8, 0.8),
("regression", -0.6, 0.7),
("blocker", -0.6, 0.8),
("outage", -0.8, 0.9),
("incident", -0.5, 0.7),
] {
lex.insert(word.to_string(), (v, a));
}
// Surprise / Discovery
for (word, v, a) in [
("unexpected", 0.0, 0.7), ("surprising", 0.1, 0.7),
("strange", -0.1, 0.6), ("weird", -0.2, 0.5),
("interesting", 0.4, 0.6), ("curious", 0.3, 0.5),
("insight", 0.6, 0.7), ("realized", 0.4, 0.6),
("found", 0.3, 0.5), ("noticed", 0.2, 0.4),
("unexpected", 0.0, 0.7),
("surprising", 0.1, 0.7),
("strange", -0.1, 0.6),
("weird", -0.2, 0.5),
("interesting", 0.4, 0.6),
("curious", 0.3, 0.5),
("insight", 0.6, 0.7),
("realized", 0.4, 0.6),
("found", 0.3, 0.5),
("noticed", 0.2, 0.4),
] {
lex.insert(word.to_string(), (v, a));
}
// Technical intensity markers
for (word, v, a) in [
("production", -0.1, 0.7), ("deploy", 0.1, 0.6),
("migration", -0.1, 0.5), ("refactor", 0.1, 0.4),
("security", -0.1, 0.6), ("performance", 0.1, 0.4),
("important", 0.2, 0.6), ("remember", 0.1, 0.5),
("production", -0.1, 0.7),
("deploy", 0.1, 0.6),
("migration", -0.1, 0.5),
("refactor", 0.1, 0.4),
("security", -0.1, 0.6),
("performance", 0.1, 0.4),
("important", 0.2, 0.6),
("remember", 0.1, 0.5),
] {
lex.insert(word.to_string(), (v, a));
}
@ -572,16 +635,33 @@ mod tests {
fn test_positive_content() {
let mut em = EmotionalMemory::new();
let eval = em.evaluate_content("Amazing breakthrough! The fix is working perfectly");
assert!(eval.valence > 0.3, "Expected positive valence, got {}", eval.valence);
assert!(eval.arousal > 0.4, "Expected high arousal, got {}", eval.arousal);
assert!(
eval.valence > 0.3,
"Expected positive valence, got {}",
eval.valence
);
assert!(
eval.arousal > 0.4,
"Expected high arousal, got {}",
eval.arousal
);
}
#[test]
fn test_negative_content() {
let mut em = EmotionalMemory::new();
let eval = em.evaluate_content("Critical bug: production server crash with data corruption");
assert!(eval.valence < -0.3, "Expected negative valence, got {}", eval.valence);
assert!(eval.arousal > 0.5, "Expected high arousal, got {}", eval.arousal);
let eval =
em.evaluate_content("Critical bug: production server crash with data corruption");
assert!(
eval.valence < -0.3,
"Expected negative valence, got {}",
eval.valence
);
assert!(
eval.arousal > 0.5,
"Expected high arousal, got {}",
eval.arousal
);
}
#[test]
@ -592,7 +672,10 @@ mod tests {
0.8, // High novelty
0.9, // High arousal
);
assert!(eval.is_flashbulb, "Should detect flashbulb with high novelty + arousal");
assert!(
eval.is_flashbulb,
"Should detect flashbulb with high novelty + arousal"
);
}
#[test]
@ -611,7 +694,10 @@ mod tests {
let mut em = EmotionalMemory::new();
let positive = em.evaluate_content("This is amazing");
let negated = em.evaluate_content("This is not amazing");
assert!(negated.valence < positive.valence, "Negation should reduce valence");
assert!(
negated.valence < positive.valence,
"Negation should reduce valence"
);
}
#[test]
@ -632,15 +718,24 @@ mod tests {
em.evaluate_content("Great amazing perfect success");
}
let (mood_v, _) = em.current_mood();
assert!(mood_v > 0.3, "Mood should be positive after positive content");
assert!(
mood_v > 0.3,
"Mood should be positive after positive content"
);
// Positive memory should get boost
let boost = em.mood_congruence_boost(0.7);
assert!(boost > 0.0, "Positive memory should get mood-congruent boost");
assert!(
boost > 0.0,
"Positive memory should get mood-congruent boost"
);
// Negative memory should get less/no boost
let neg_boost = em.mood_congruence_boost(-0.7);
assert!(neg_boost < boost, "Negative memory should get less boost in positive mood");
assert!(
neg_boost < boost,
"Negative memory should get less boost in positive mood"
);
}
#[test]
@ -674,7 +769,10 @@ mod tests {
}
let (v1, a1) = em.current_mood();
assert!(v1 < 0.0, "Mood should be negative after negative content");
assert!(a1 > 0.3, "Arousal should be elevated after negative content");
assert!(
a1 > 0.3,
"Arousal should be elevated after negative content"
);
}
#[test]

View file

@ -1076,9 +1076,10 @@ impl ContentStore {
// Check cache first
let cache_key = self.cache_key(pointer);
if let Ok(cache) = self.cache.read()
&& let Some(data) = cache.get(&cache_key) {
return Ok(data.clone());
}
&& let Some(data) = cache.get(&cache_key)
{
return Ok(data.clone());
}
// Retrieve from storage
let data = match &pointer.storage_location {
@ -1131,22 +1132,23 @@ impl ContentStore {
}
if let Ok(mut cache) = self.cache.write()
&& let Ok(mut size) = self.current_cache_size.write() {
// Evict if necessary
while *size + data_size > self.max_cache_size && !cache.is_empty() {
// Simple eviction: remove first entry
if let Some(key_to_remove) = cache.keys().next().cloned() {
if let Some(removed) = cache.remove(&key_to_remove) {
*size = size.saturating_sub(removed.len());
}
} else {
break;
&& let Ok(mut size) = self.current_cache_size.write()
{
// Evict if necessary
while *size + data_size > self.max_cache_size && !cache.is_empty() {
// Simple eviction: remove first entry
if let Some(key_to_remove) = cache.keys().next().cloned() {
if let Some(removed) = cache.remove(&key_to_remove) {
*size = size.saturating_sub(removed.len());
}
} else {
break;
}
cache.insert(key.to_string(), data.to_vec());
*size += data_size;
}
cache.insert(key.to_string(), data.to_vec());
*size += data_size;
}
}
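The eviction loop above maintains one invariant: evict arbitrary entries until the incoming value fits under `max_cache_size`, then insert and bump the size counter. A compact sketch of that invariant with the lock plumbing stripped away (insertion after the loop, as in the code above):

```rust
use std::collections::BTreeMap;

/// Size-bounded byte cache: evict entries until the new value fits, then
/// insert — the same invariant as the eviction loop above, minus the
/// RwLock plumbing. Eviction order (first key) is arbitrary, as noted
/// in the original "Simple eviction" comment.
fn insert_bounded(
    cache: &mut BTreeMap<String, Vec<u8>>,
    size: &mut usize,
    max: usize,
    key: &str,
    data: &[u8],
) {
    while *size + data.len() > max && !cache.is_empty() {
        if let Some(k) = cache.keys().next().cloned() {
            if let Some(removed) = cache.remove(&k) {
                *size = size.saturating_sub(removed.len());
            }
        } else {
            break;
        }
    }
    cache.insert(key.to_string(), data.to_vec());
    *size += data.len();
}

fn main() {
    let mut cache = BTreeMap::new();
    let mut size = 0usize;
    insert_bounded(&mut cache, &mut size, 8, "a", &[0; 6]);
    insert_bounded(&mut cache, &mut size, 8, "b", &[0; 6]);
    // "a" was evicted to make room for "b"; the size bound holds.
    assert!(!cache.contains_key("a") && cache.contains_key("b"));
    assert_eq!(size, 6);
}
```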
/// Retrieve from SQLite (placeholder - to be integrated with Storage)
@ -1393,15 +1395,16 @@ impl HippocampalIndex {
// Calculate semantic score
if let Some(ref query_embedding) = query.semantic_embedding
&& !index.semantic_summary.is_empty() {
let query_compressed = self.compress_embedding(query_embedding);
match_result.semantic_score =
self.cosine_similarity(&query_compressed, &index.semantic_summary);
&& !index.semantic_summary.is_empty()
{
let query_compressed = self.compress_embedding(query_embedding);
match_result.semantic_score =
self.cosine_similarity(&query_compressed, &index.semantic_summary);
if match_result.semantic_score < query.min_similarity {
continue;
}
if match_result.semantic_score < query.min_similarity {
continue;
}
}
// Calculate text score
if let Some(ref text_query) = query.text_query {
@ -1442,21 +1445,24 @@ impl HippocampalIndex {
fn passes_filters(&self, index: &MemoryIndex, query: &IndexQuery) -> bool {
// Time range filter
if let Some((start, end)) = query.time_range
&& (index.temporal_marker.created_at < start || index.temporal_marker.created_at > end) {
return false;
}
&& (index.temporal_marker.created_at < start || index.temporal_marker.created_at > end)
{
return false;
}
// Importance flags filter
if let Some(ref required) = query.required_flags
&& !index.matches_importance(required.to_bits()) {
return false;
}
&& !index.matches_importance(required.to_bits())
{
return false;
}
// Node type filter
if let Some(ref types) = query.node_types
&& !types.contains(&index.node_type) {
return false;
}
&& !types.contains(&index.node_type)
{
return false;
}
true
}
@ -1574,9 +1580,10 @@ impl HippocampalIndex {
for m in matches {
// Record access
if let Ok(mut indices) = self.indices.write()
&& let Some(index) = indices.get_mut(&m.index.memory_id) {
index.record_access();
}
&& let Some(index) = indices.get_mut(&m.index.memory_id)
{
index.record_access();
}
match self.retrieve_content(&m.index) {
Ok(memory) => memories.push(memory),
@ -1881,37 +1888,39 @@ impl HippocampalIndex {
) -> Result<MemoryBarcode> {
// Check if already indexed
if let Ok(indices) = self.indices.read()
&& indices.contains_key(node_id) {
return Err(HippocampalIndexError::MigrationError(
"Node already indexed".to_string(),
));
}
&& indices.contains_key(node_id)
{
return Err(HippocampalIndexError::MigrationError(
"Node already indexed".to_string(),
));
}
// Create the index
let barcode = self.index_memory(node_id, content, node_type, created_at, embedding)?;
// Update importance flags based on existing data
if let Ok(mut indices) = self.indices.write()
&& let Some(index) = indices.get_mut(node_id) {
// Set high retention flag if applicable
if retention_strength > 0.7 {
index.importance_flags.set_high_retention(true);
}
// Set emotional flag if applicable
if sentiment_magnitude > 0.5 {
index.importance_flags.set_emotional(true);
}
// Add SQLite content pointer
index.content_pointers.clear();
index.add_content_pointer(ContentPointer::sqlite(
"knowledge_nodes",
barcode.id as i64,
ContentType::Text,
));
&& let Some(index) = indices.get_mut(node_id)
{
// Set high retention flag if applicable
if retention_strength > 0.7 {
index.importance_flags.set_high_retention(true);
}
// Set emotional flag if applicable
if sentiment_magnitude > 0.5 {
index.importance_flags.set_emotional(true);
}
// Add SQLite content pointer
index.content_pointers.clear();
index.add_content_pointer(ContentPointer::sqlite(
"knowledge_nodes",
barcode.id as i64,
ContentType::Text,
));
}
Ok(barcode)
}

View file

@ -359,17 +359,18 @@ impl PredictionModel {
let ngrams = self.extract_ngrams(content);
if let Ok(mut patterns) = self.patterns.write()
&& let Ok(mut total) = self.total_count.write() {
for ngram in ngrams {
*patterns.entry(ngram).or_insert(0) += 1;
*total += 1;
}
// Prune if too large
if patterns.len() > MAX_PREDICTION_PATTERNS {
self.apply_decay(&mut patterns);
}
&& let Ok(mut total) = self.total_count.write()
{
for ngram in ngrams {
*patterns.entry(ngram).or_insert(0) += 1;
*total += 1;
}
// Prune if too large
if patterns.len() > MAX_PREDICTION_PATTERNS {
self.apply_decay(&mut patterns);
}
}
}
fn compute_prediction_error(&self, content: &str) -> f64 {
@ -1186,7 +1187,11 @@ impl RewardSignal {
// Limit pattern count
if patterns.len() > 1000 {
patterns.sort_by(|a, b| b.strength.partial_cmp(&a.strength).unwrap_or(std::cmp::Ordering::Equal));
patterns.sort_by(|a, b| {
b.strength
.partial_cmp(&a.strength)
.unwrap_or(std::cmp::Ordering::Equal)
});
patterns.truncate(500);
}
}
@ -1226,7 +1231,9 @@ impl RewardSignal {
entries.sort_by(|a, b| {
// Sort by score, then by recency
b.1.partial_cmp(&a.1).unwrap_or(std::cmp::Ordering::Equal).then_with(|| b.2.cmp(&a.2))
b.1.partial_cmp(&a.1)
.unwrap_or(std::cmp::Ordering::Equal)
.then_with(|| b.2.cmp(&a.2))
});
// Keep top entries

View file

@ -1267,13 +1267,14 @@ impl MemoryStateInfo {
}
MemoryState::Unavailable => {
if let Some(until) = lifecycle.suppression_until
&& until > now {
recommendations.push(format!(
"This memory is temporarily suppressed. \
&& until > now
{
recommendations.push(format!(
"This memory is temporarily suppressed. \
It will become accessible again after {}.",
until.format("%Y-%m-%d %H:%M UTC")
));
}
until.format("%Y-%m-%d %H:%M UTC")
));
}
}
MemoryState::Dormant => {
if duration_since_access.num_days() > 20 {

View file

@ -57,6 +57,7 @@
//! - Collins, A. M., & Loftus, E. F. (1975). A spreading-activation theory of semantic
//! processing. Psychological Review.
pub mod active_forgetting;
pub mod context_memory;
pub mod emotional_memory;
pub mod hippocampal_index;
@ -67,6 +68,12 @@ pub mod prospective_memory;
pub mod spreading_activation;
pub mod synaptic_tagging;
// Active forgetting — top-down inhibitory control (Anderson 2025 + Davis Rac1)
pub use active_forgetting::{
ActiveForgettingSystem, DEFAULT_CASCADE_DECAY, DEFAULT_LABILE_HOURS, DEFAULT_MAX_PENALTY,
DEFAULT_SIF_K, SuppressionStats,
};
// Re-exports for convenient access
pub use synaptic_tagging::{
// Results
@ -94,15 +101,23 @@ pub use context_memory::{
// Memory states (accessibility continuum)
pub use memory_states::{
// Constants
ACCESSIBILITY_ACTIVE,
ACCESSIBILITY_DORMANT,
ACCESSIBILITY_SILENT,
ACCESSIBILITY_UNAVAILABLE,
// Accessibility scoring
AccessibilityCalculator,
BatchUpdateResult,
COMPETITION_SIMILARITY_THRESHOLD,
CompetitionCandidate,
CompetitionConfig,
CompetitionEvent,
// Competition system (Retrieval-Induced Forgetting)
CompetitionManager,
CompetitionResult,
DEFAULT_ACTIVE_DECAY_HOURS,
DEFAULT_DORMANT_DECAY_DAYS,
LifecycleSummary,
MemoryLifecycle,
// Core types
@ -116,14 +131,6 @@ pub use memory_states::{
StateTransitionReason,
// State management
StateUpdateService,
// Constants
ACCESSIBILITY_ACTIVE,
ACCESSIBILITY_DORMANT,
ACCESSIBILITY_SILENT,
ACCESSIBILITY_UNAVAILABLE,
COMPETITION_SIMILARITY_THRESHOLD,
DEFAULT_ACTIVE_DECAY_HOURS,
DEFAULT_DORMANT_DECAY_DAYS,
};
// Multi-channel importance signaling (Neuromodulator-inspired)
@ -174,6 +181,8 @@ pub use hippocampal_index::{
HippocampalIndex,
HippocampalIndexConfig,
HippocampalIndexError,
// Constants
INDEX_EMBEDDING_DIM,
ImportanceFlags,
IndexLink,
IndexMatch,
@ -187,40 +196,39 @@ pub use hippocampal_index::{
MigrationResult,
StorageLocation,
TemporalMarker,
// Constants
INDEX_EMBEDDING_DIM,
};
// Predictive memory retrieval (Free Energy Principle - Friston, 2010)
pub use predictive_retrieval::{
// Backward-compatible aliases
ContextualPredictor,
Prediction,
PredictionConfidence,
PredictiveConfig,
PredictiveRetriever,
SequencePredictor,
TemporalPredictor,
// Enhanced types (Friston's Active Inference)
PredictedMemory,
Prediction,
PredictionConfidence,
PredictionOutcome,
PredictionReason,
PredictiveConfig,
PredictiveMemory,
PredictiveMemoryConfig,
PredictiveMemoryError,
PredictiveRetriever,
ProjectContext as PredictiveProjectContext,
QueryPattern,
SequencePredictor,
SessionContext as PredictiveSessionContext,
TemporalPatterns,
TemporalPredictor,
UserModel,
};
// Prospective memory (Einstein & McDaniel, 1990)
pub use prospective_memory::{
// Core engine
ProspectiveMemory,
ProspectiveMemoryConfig,
ProspectiveMemoryError,
// Context monitoring
Context as ProspectiveContext,
ContextMonitor,
// Triggers and patterns
ContextPattern,
// Intentions
Intention,
IntentionParser,
@ -229,13 +237,12 @@ pub use prospective_memory::{
IntentionStatus,
IntentionTrigger,
Priority,
// Triggers and patterns
ContextPattern,
// Core engine
ProspectiveMemory,
ProspectiveMemoryConfig,
ProspectiveMemoryError,
RecurrencePattern,
TriggerPattern,
// Context monitoring
Context as ProspectiveContext,
ContextMonitor,
};
// Spreading activation (Associative Memory Network - Collins & Loftus, 1975)

View file

@ -915,7 +915,11 @@ impl PredictiveMemory {
predictions.retain(|p| p.confidence >= self.config.min_confidence);
// Sort by confidence
predictions.sort_by(|a, b| b.confidence.partial_cmp(&a.confidence).unwrap_or(std::cmp::Ordering::Equal));
predictions.sort_by(|a, b| {
b.confidence
.partial_cmp(&a.confidence)
.unwrap_or(std::cmp::Ordering::Equal)
});
// Truncate to max
predictions.truncate(self.config.max_predictions);

View file

@ -130,8 +130,7 @@ pub type Result<T> = std::result::Result<T, ProspectiveMemoryError>;
// ============================================================================
/// Priority levels for intentions
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Serialize, Deserialize)]
#[derive(Default)]
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Serialize, Deserialize, Default)]
pub enum Priority {
/// Low priority - nice to remember
Low = 1,
@ -144,7 +143,6 @@ pub enum Priority {
Critical = 4,
}
impl Priority {
/// Get numeric value for comparison
pub fn value(&self) -> u8 {
@ -178,8 +176,7 @@ impl Priority {
}
/// Status of an intention
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[derive(Default)]
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, Default)]
pub enum IntentionStatus {
/// Intention is active and being monitored
#[default]
@ -196,7 +193,6 @@ pub enum IntentionStatus {
Snoozed,
}
/// Pattern for matching trigger conditions
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum TriggerPattern {
@ -695,15 +691,17 @@ impl Intention {
// Check snoozed
if let Some(snoozed_until) = self.snoozed_until
&& Utc::now() < snoozed_until {
return false;
}
&& Utc::now() < snoozed_until
{
return false;
}
// Check minimum interval
if let Some(last) = self.last_reminded_at
&& (Utc::now() - last) < Duration::minutes(MIN_REMINDER_INTERVAL_MINUTES) {
return false;
}
&& (Utc::now() - last) < Duration::minutes(MIN_REMINDER_INTERVAL_MINUTES)
{
return false;
}
true
}
@ -956,9 +954,17 @@ impl IntentionParser {
let when_char_idx = text_lower[..when_byte_idx].chars().count();
let content_part: String = if text_lower.starts_with("remind me to ") {
original.chars().skip(13).take(when_char_idx.saturating_sub(13)).collect()
original
.chars()
.skip(13)
.take(when_char_idx.saturating_sub(13))
.collect()
} else if text_lower.starts_with("remind me ") {
original.chars().skip(10).take(when_char_idx.saturating_sub(10)).collect()
original
.chars()
.skip(10)
.take(when_char_idx.saturating_sub(10))
.collect()
} else {
original.chars().take(when_char_idx).collect()
};
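The `chars().skip(n).take(..)` dance avoids byte slicing, which would panic on a multi-byte UTF-8 boundary; `when_char_idx` is likewise a character count, not a byte offset ("remind me to " is 13 chars, "remind me " is 10). The extraction pattern in isolation:

```rust
/// Extract the content between a known prefix and the "when" clause,
/// counting in chars so multi-byte input can never split a code point.
/// `prefix_chars` and `when_char_idx` are character counts, as above.
fn content_between(original: &str, prefix_chars: usize, when_char_idx: usize) -> String {
    original
        .chars()
        .skip(prefix_chars)
        .take(when_char_idx.saturating_sub(prefix_chars))
        .collect()
}

fn main() {
    // "remind me to " is 13 chars; "tomorrow" begins at char index 25.
    // "café" makes the char/byte distinction matter.
    let text = "remind me to deploy café tomorrow";
    let content = content_between(text, 13, 25);
    assert_eq!(content.trim(), "deploy café");
}
```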
@ -1047,8 +1053,6 @@ impl IntentionParser {
/// Extract content from text, removing trigger keywords
fn extract_content(&self, _text_lower: &str, original: &str, keyword: &str) -> String {
original
.replace(keyword, "")
.replace(&keyword.to_uppercase(), "")
@ -1267,9 +1271,10 @@ impl ProspectiveMemory {
// Check if snoozed intention should wake
if intention.status == IntentionStatus::Snoozed
&& let Some(until) = intention.snoozed_until
&& Utc::now() >= until {
intention.wake();
}
&& Utc::now() >= until
{
intention.wake();
}
continue;
}
@ -1277,10 +1282,11 @@ impl ProspectiveMemory {
if intention
.trigger
.is_triggered(context, &context.recent_events)
&& intention.should_remind() {
intention.mark_triggered();
triggered.push(intention.clone());
}
&& intention.should_remind()
{
intention.mark_triggered();
triggered.push(intention.clone());
}
// Check for deadline escalation
if self.config.enable_escalation {

View file

@ -57,7 +57,6 @@ pub enum LinkType {
UserDefined,
}
// ============================================================================
// ASSOCIATION EDGE
// ============================================================================
@ -271,13 +270,7 @@ impl ActivationNetwork {
}
/// Add an edge between two nodes
pub fn add_edge(
&mut self,
source: String,
target: String,
link_type: LinkType,
strength: f64,
) {
pub fn add_edge(&mut self, source: String, target: String, link_type: LinkType, strength: f64) {
// Ensure both nodes exist
self.add_node(source.clone());
self.add_node(target.clone());
@ -288,9 +281,10 @@ impl ActivationNetwork {
// Update node's edge list
if let Some(node) = self.nodes.get_mut(&source)
&& !node.edges.contains(&target) {
node.edges.push(target);
}
&& !node.edges.contains(&target)
{
node.edges.push(target);
}
}
/// Activate a node and spread activation through the network
@ -314,9 +308,10 @@ impl ActivationNetwork {
while let Some((current_id, current_activation, hops, path)) = queue.pop() {
// Skip if we've already visited this node with equal or higher activation
if let Some(&prev_activation) = visited.get(&current_id)
&& prev_activation >= current_activation {
continue;
}
&& prev_activation >= current_activation
{
continue;
}
visited.insert(current_id.clone(), current_activation);
// Check hop limit
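The revisit policy above is the core of best-first spreading activation: a node is re-expanded only when reached with strictly higher activation than any previous visit. That check, lifted out on its own:

```rust
use std::collections::HashMap;

/// Spreading-activation revisit policy: expand a node only if this
/// arrival carries strictly more activation than any previous visit,
/// recording the new maximum — the same check as the loop above.
fn should_expand(visited: &mut HashMap<String, f64>, id: &str, activation: f64) -> bool {
    match visited.get(id) {
        Some(&prev) if prev >= activation => false,
        _ => {
            visited.insert(id.to_string(), activation);
            true
        }
    }
}

fn main() {
    let mut visited = HashMap::new();
    assert!(should_expand(&mut visited, "n1", 0.5));
    // A weaker re-arrival is pruned...
    assert!(!should_expand(&mut visited, "n1", 0.4));
    // ...but a stronger path reopens the node.
    assert!(should_expand(&mut visited, "n1", 0.8));
}
```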
@ -499,7 +494,7 @@ mod tests {
#[test]
fn test_activation_threshold() {
let mut network = ActivationNetwork::with_config(ActivationConfig {
decay_factor: 0.1, // Very high decay
decay_factor: 0.1, // Very high decay
min_threshold: 0.5, // High threshold
..Default::default()
});

View file

@ -122,7 +122,6 @@ pub enum DecayFunction {
Logarithmic,
}
impl DecayFunction {
/// Calculate decayed strength
///

View file

@ -43,8 +43,10 @@ pub fn classify_intent(query: &str) -> QueryIntent {
if lower.contains("how to") || lower.starts_with("how do") || lower.starts_with("steps") {
return QueryIntent::HowTo;
}
if lower.starts_with("what is") || lower.starts_with("what are")
|| lower.starts_with("define") || lower.starts_with("explain")
if lower.starts_with("what is")
|| lower.starts_with("what are")
|| lower.starts_with("define")
|| lower.starts_with("explain")
{
return QueryIntent::Definition;
}
@ -54,8 +56,11 @@ pub fn classify_intent(query: &str) -> QueryIntent {
if lower.starts_with("when") || lower.contains("date") || lower.contains("timeline") {
return QueryIntent::Temporal;
}
if query.contains('(') || query.contains('{') || query.contains("fn ")
|| query.contains("class ") || query.contains("::")
if query.contains('(')
|| query.contains('{')
|| query.contains("fn ")
|| query.contains("class ")
|| query.contains("::")
{
return QueryIntent::Technical;
}
@ -161,23 +166,38 @@ mod tests {
#[test]
fn test_classify_definition() {
assert_eq!(classify_intent("What is FSRS?"), QueryIntent::Definition);
assert_eq!(classify_intent("explain spaced repetition"), QueryIntent::Definition);
assert_eq!(
classify_intent("explain spaced repetition"),
QueryIntent::Definition
);
}
#[test]
fn test_classify_howto() {
assert_eq!(classify_intent("how to configure embeddings"), QueryIntent::HowTo);
assert_eq!(classify_intent("How do I search memories?"), QueryIntent::HowTo);
assert_eq!(
classify_intent("how to configure embeddings"),
QueryIntent::HowTo
);
assert_eq!(
classify_intent("How do I search memories?"),
QueryIntent::HowTo
);
}
#[test]
fn test_classify_reasoning() {
assert_eq!(classify_intent("why does retention decay?"), QueryIntent::Reasoning);
assert_eq!(
classify_intent("why does retention decay?"),
QueryIntent::Reasoning
);
}
#[test]
fn test_classify_temporal() {
assert_eq!(classify_intent("when did the last consolidation run"), QueryIntent::Temporal);
assert_eq!(
classify_intent("when did the last consolidation run"),
QueryIntent::Temporal
);
}
#[test]
@ -188,7 +208,10 @@ mod tests {
#[test]
fn test_classify_lookup() {
assert_eq!(classify_intent("vestige memory system"), QueryIntent::Lookup);
assert_eq!(
classify_intent("vestige memory system"),
QueryIntent::Lookup
);
}
#[test]
@ -200,10 +223,7 @@ mod tests {
#[test]
fn test_centroid_embedding() {
let embeddings = vec![
vec![1.0, 0.0, 0.0],
vec![0.0, 1.0, 0.0],
];
let embeddings = vec![vec![1.0, 0.0, 0.0], vec![0.0, 1.0, 0.0]];
let centroid = centroid_embedding(&embeddings);
assert_eq!(centroid.len(), 3);
// Should be normalized

View file

@ -15,21 +15,21 @@ mod temporal;
mod vector;
pub use vector::{
VectorIndex, VectorIndexConfig, VectorIndexStats, VectorSearchError, DEFAULT_CONNECTIVITY,
DEFAULT_DIMENSIONS,
DEFAULT_CONNECTIVITY, DEFAULT_DIMENSIONS, VectorIndex, VectorIndexConfig, VectorIndexStats,
VectorSearchError,
};
pub use keyword::{sanitize_fts5_query, KeywordSearcher};
pub use keyword::{KeywordSearcher, sanitize_fts5_query};
pub use hybrid::{linear_combination, reciprocal_rank_fusion, HybridSearchConfig, HybridSearcher};
pub use hybrid::{HybridSearchConfig, HybridSearcher, linear_combination, reciprocal_rank_fusion};
pub use temporal::TemporalSearcher;
// GOD TIER 2026: Reranking for +15-20% precision
pub use reranker::{
Reranker, RerankerConfig, RerankerError, RerankedResult,
DEFAULT_RERANK_COUNT, DEFAULT_RETRIEVAL_COUNT,
DEFAULT_RERANK_COUNT, DEFAULT_RETRIEVAL_COUNT, RerankedResult, Reranker, RerankerConfig,
RerankerError,
};
// v2.0: HyDE-inspired query expansion for improved semantic search
pub use hyde::{classify_intent, expand_query, centroid_embedding, QueryIntent};
pub use hyde::{QueryIntent, centroid_embedding, classify_intent, expand_query};

View file

@ -174,9 +174,9 @@ impl VectorIndex {
/// Reserve capacity for a specified number of vectors
/// This should be called before adding vectors to avoid segmentation faults
pub fn reserve(&self, capacity: usize) -> Result<(), VectorSearchError> {
self.index
.reserve(capacity)
.map_err(|e| VectorSearchError::IndexCreation(format!("Failed to reserve capacity: {}", e)))
self.index.reserve(capacity).map_err(|e| {
VectorSearchError::IndexCreation(format!("Failed to reserve capacity: {}", e))
})
}
/// Add a vector with a string key

View file

@ -49,6 +49,11 @@ pub const MIGRATIONS: &[Migration] = &[
description: "v2.0.0 Cognitive Leap: emotional memory, flashbulb encoding, temporal hierarchy",
up: MIGRATION_V9_UP,
},
Migration {
version: 10,
description: "v2.0.5 Intentional Amnesia: active forgetting — top-down suppression (Anderson 2025 + Davis Rac1)",
up: MIGRATION_V10_UP,
},
];
/// A database migration
@ -315,7 +320,7 @@ const MIGRATION_V4_UP: &str = r#"
-- TEMPORAL KNOWLEDGE GRAPH (Like Zep's Graphiti)
-- ============================================================================
-- DEPRECATED (v2.1.0): knowledge_edges is unused. All graph edges use
-- DEPRECATED (v2.0.5): knowledge_edges is unused. All graph edges use
-- memory_connections (migration V3). This table was designed for bi-temporal
-- edge support but was never wired. Retained for schema compatibility with
-- existing databases. Do NOT add queries against this table.
@ -608,6 +613,41 @@ ALTER TABLE dream_history ADD COLUMN creative_connections_found INTEGER DEFAULT
UPDATE schema_version SET version = 9, applied_at = datetime('now');
"#;
/// V10: v2.0.5 Intentional Amnesia — Top-Down Active Forgetting
///
/// Adds columns to `knowledge_nodes` for user-initiated suppression distinct
/// from passive FSRS decay and from bottom-up retrieval-induced forgetting
/// (which lives on `memory_states.suppression_until`). These columns are
/// incremented by the `suppress` MCP tool (tool #24) and consumed by the
/// search scoring stage + background Rac1 cascade worker.
///
/// References:
/// - Anderson et al. (2025). Brain mechanisms underlying the inhibitory
/// control of thought. Nat Rev Neurosci. DOI 10.1038/s41583-025-00929-y
/// - Cervantes-Sandoval & Davis (2020). Rac1 Impairs Forgetting-Induced
/// Cellular Plasticity. Front Cell Neurosci. PMC7477079
const MIGRATION_V10_UP: &str = r#"
-- Top-down suppression count (Suppression-Induced Forgetting, Anderson 2025).
-- Compounds with each `suppress` call, saturates via the k × count formula
-- in active_forgetting::retrieval_penalty().
ALTER TABLE knowledge_nodes ADD COLUMN suppression_count INTEGER DEFAULT 0;
-- Timestamp of the most recent suppression. Used for the 24h labile window
-- (reversal is allowed only while (now - suppressed_at) < labile_hours).
ALTER TABLE knowledge_nodes ADD COLUMN suppressed_at TEXT;
-- Partial indices only materialise rows actually involved in suppression.
CREATE INDEX IF NOT EXISTS idx_nodes_suppression_count
ON knowledge_nodes(suppression_count)
WHERE suppression_count > 0;
CREATE INDEX IF NOT EXISTS idx_nodes_suppressed_at
ON knowledge_nodes(suppressed_at)
WHERE suppressed_at IS NOT NULL;
UPDATE schema_version SET version = 10, applied_at = datetime('now');
"#;
/// Get current schema version from database
pub fn get_current_version(conn: &rusqlite::Connection) -> rusqlite::Result<u32> {
conn.query_row(

File diff suppressed because it is too large