feat(v2.0.5): Intentional Amnesia — active forgetting via top-down inhibitory control

First AI memory system to model forgetting as a neuroscience-grounded
PROCESS rather than passive decay. Adds the `suppress` MCP tool (#24),
Rac1 cascade worker, migration V10, and dashboard forgetting indicators.

Based on:
- Anderson, Hanslmayr & Quaegebeur (2025), Nat Rev Neurosci — right
  lateral PFC as the domain-general inhibitory controller; SIF
  compounds with each stopping attempt.
- Cervantes-Sandoval et al. (2020), Front Cell Neurosci PMC7477079 —
  Rac1 GTPase as the active synaptic destabilization mechanism.

What's new:
* `suppress` MCP tool — each call increments `suppression_count`, and
  a `0.15 × count` penalty (saturating at 80%) is subtracted from
  retrieval scores during hybrid search. Distinct from delete
  (removes) and demote (one-shot).
* Rac1 cascade worker — background sweep piggybacks on the 6h
  consolidation loop, walks `memory_connections` edges from
  recently-suppressed seeds, and applies attenuated FSRS decay to
  co-activated neighbors. You don't just forget Jake — you fade
  the café, the roommate, the birthday.
* 24h labile window — reversible via `suppress({id, reverse: true})`
  within 24 hours. Matches Nader reconsolidation semantics.
* Migration V10 — additive-only (`suppression_count`, `suppressed_at`
  + partial indices). All v2.0.x DBs upgrade seamlessly on first launch.
* Dashboard: `ForgettingIndicator.svelte` pulses when suppressions
  are active. 3D graph nodes dim to 20% opacity when suppressed.
  New WebSocket events: `MemorySuppressed`, `MemoryUnsuppressed`,
  `Rac1CascadeSwept`. Heartbeat carries `suppressed_count`.
* Search pipeline: SIF penalty inserted into the accessibility stage
  so it stacks on top of passive FSRS decay.
* Tool count bumped 23 → 24. Cognitive modules 29 → 30.
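The compounding penalty above reduces to a small piece of arithmetic. The sketch below is illustrative, not the shipped implementation: the helper names and the subtract-then-clamp-at-zero shape are assumptions, while the 0.15 step, the 80% cap, and stacking on top of passive FSRS decay come from the notes.

```rust
/// Sketch of the compounding SIF penalty (assumed helper names).
/// 0.15 per suppress call, saturating at 0.80, per the release notes.
fn sif_penalty(suppression_count: u32) -> f64 {
    (0.15 * suppression_count as f64).min(0.80)
}

/// Applied at the accessibility stage, after passive FSRS decay has
/// already reduced the base retrieval score. Flooring at zero is an
/// assumption, not confirmed by the notes.
fn apply_suppression(score_after_fsrs: f64, suppression_count: u32) -> f64 {
    (score_after_fsrs - sif_penalty(suppression_count)).max(0.0)
}

fn main() {
    assert!((sif_penalty(3) - 0.45).abs() < 1e-9); // three calls compound
    assert_eq!(sif_penalty(10), 0.80); // saturates at 80%
    assert!((apply_suppression(0.9, 2) - 0.60).abs() < 1e-9);
    assert_eq!(apply_suppression(0.1, 6), 0.0); // never goes negative
}
```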
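The Rac1 cascade sweep, likewise, reduces to a one-hop graph walk. This is a minimal sketch with a hypothetical edge map and a made-up attenuation constant; only the overall shape (suppressed seeds fade their co-activated neighbors at reduced strength) comes from the notes.

```rust
use std::collections::HashMap;

/// Hypothetical attenuation: neighbors decay at half the seed's rate.
const CASCADE_ATTENUATION: f64 = 0.5;

/// One-hop sweep over `memory_connections`-style edges: every neighbor of
/// a recently suppressed seed has its retrieval strength reduced by an
/// attenuated decay factor. Returns how many neighbors were touched.
fn cascade_sweep(
    edges: &HashMap<String, Vec<String>>, // seed id -> neighbor ids
    strengths: &mut HashMap<String, f64>, // memory id -> retrieval strength
    seeds: &[String],
    seed_decay: f64,
) -> usize {
    let mut affected = 0;
    for seed in seeds {
        for neighbor in edges.get(seed).into_iter().flatten() {
            if let Some(s) = strengths.get_mut(neighbor) {
                *s *= 1.0 - seed_decay * CASCADE_ATTENUATION;
                affected += 1;
            }
        }
    }
    affected
}

fn main() {
    let mut edges = HashMap::new();
    edges.insert(
        "jake".to_string(),
        vec!["cafe".to_string(), "birthday".to_string()],
    );
    let mut strengths = HashMap::new();
    strengths.insert("cafe".to_string(), 0.8);
    strengths.insert("birthday".to_string(), 0.6);

    // Suppressing "jake" fades the cafe and the birthday at half strength.
    let touched = cascade_sweep(&edges, &mut strengths, &["jake".to_string()], 0.3);
    assert_eq!(touched, 2);
    assert!((strengths["cafe"] - 0.68).abs() < 1e-9);
}
```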

Memories persist — they are INHIBITED, not erased. `memory.get(id)`
returns full content through any number of suppressions. The 24h
labile window is a grace period for regret.
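The grace period itself is a timestamp comparison against `suppressed_at` (a real column added by migration V10); the helper below and its names are illustrative only.

```rust
use std::time::{Duration, SystemTime};

/// 24h labile window, per the Nader reconsolidation semantics above.
const LABILE_WINDOW: Duration = Duration::from_secs(24 * 60 * 60);

/// Sketch of the check behind `suppress({id, reverse: true})`: reversal is
/// accepted only while the suppression is still inside the labile window.
fn is_reversible(suppressed_at: SystemTime, now: SystemTime) -> bool {
    match now.duration_since(suppressed_at) {
        Ok(elapsed) => elapsed <= LABILE_WINDOW,
        Err(_) => true, // clock skew: suppressed_at ahead of now, treat as labile
    }
}

fn main() {
    let now = SystemTime::now();
    assert!(is_reversible(now - Duration::from_secs(3600), now));
    assert!(!is_reversible(now - Duration::from_secs(25 * 3600), now));
}
```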

Also fixes issue #31 (buggy dashboard graph view), a companion UI
bug discovered during the v2.0.5 audit cycle:

* Root cause: node glow `SpriteMaterial` had no `map`, so
  `THREE.Sprite` rendered as a solid-coloured 1×1 plane. Additive
  blending + `UnrealBloomPass(0.8, 0.4, 0.85)` amplified the square
  edges into hard-edged glowing cubes.
* Fix: shared 128×128 radial-gradient `CanvasTexture` singleton used
  as the sprite map. Retuned bloom to `(0.55, 0.6, 0.2)`. Halved fog
  density (0.008 → 0.0035). Edges bumped from dark navy `0x4a4a7a`
  to brand violet `0x8b5cf6` with higher opacity. Added explicit
  `scene.background` and a 2000-point starfield for depth.
* 21 regression tests added in `ui-fixes.test.ts`, locking in every
  invariant (shared texture singleton, depthWrite:false, scale
  ×6, bloom magic numbers via source regex, starfield presence).

Tests: 1,284 Rust (+47) + 171 Vitest (+21) = 1,455 total, 0 failed
Clippy: clean across all targets, zero warnings
Release binary: 22.6MB, `cargo build --release -p vestige-mcp` green
Versions: workspace aligned at 2.0.5 across all 6 crates/packages

Closes #31
Sam Valladares 2026-04-14 17:30:30 -05:00
parent 95bde93b49
commit 8178beb961
359 changed files with 8277 additions and 3416 deletions


@@ -18,7 +18,9 @@ use vestige_core::{IngestInput, Storage};
#[command(author = "samvallad33")]
#[command(version = env!("CARGO_PKG_VERSION"))]
#[command(about = "CLI for the Vestige cognitive memory system")]
#[command(long_about = "Vestige is a cognitive memory system based on 130 years of memory research.\n\nIt implements FSRS-6, spreading activation, synaptic tagging, and more.")]
#[command(
long_about = "Vestige is a cognitive memory system based on 130 years of memory research.\n\nIt implements FSRS-6, spreading activation, synaptic tagging, and more."
)]
struct Cli {
#[command(subcommand)]
command: Commands,
@@ -171,21 +173,49 @@ fn run_stats(show_tagging: bool, show_states: bool) -> anyhow::Result<()> {
// Basic stats
println!("{}: {}", "Total Memories".white().bold(), stats.total_nodes);
println!("{}: {}", "Due for Review".white().bold(), stats.nodes_due_for_review);
println!("{}: {:.1}%", "Average Retention".white().bold(), stats.average_retention * 100.0);
println!("{}: {:.2}", "Average Storage Strength".white().bold(), stats.average_storage_strength);
println!("{}: {:.2}", "Average Retrieval Strength".white().bold(), stats.average_retrieval_strength);
println!("{}: {}", "With Embeddings".white().bold(), stats.nodes_with_embeddings);
println!(
"{}: {}",
"Due for Review".white().bold(),
stats.nodes_due_for_review
);
println!(
"{}: {:.1}%",
"Average Retention".white().bold(),
stats.average_retention * 100.0
);
println!(
"{}: {:.2}",
"Average Storage Strength".white().bold(),
stats.average_storage_strength
);
println!(
"{}: {:.2}",
"Average Retrieval Strength".white().bold(),
stats.average_retrieval_strength
);
println!(
"{}: {}",
"With Embeddings".white().bold(),
stats.nodes_with_embeddings
);
if let Some(model) = &stats.embedding_model {
println!("{}: {}", "Embedding Model".white().bold(), model);
}
if let Some(oldest) = stats.oldest_memory {
println!("{}: {}", "Oldest Memory".white().bold(), oldest.format("%Y-%m-%d %H:%M:%S"));
println!(
"{}: {}",
"Oldest Memory".white().bold(),
oldest.format("%Y-%m-%d %H:%M:%S")
);
}
if let Some(newest) = stats.newest_memory {
println!("{}: {}", "Newest Memory".white().bold(), newest.format("%Y-%m-%d %H:%M:%S"));
println!(
"{}: {}",
"Newest Memory".white().bold(),
newest.format("%Y-%m-%d %H:%M:%S")
);
}
// Embedding coverage
@@ -194,7 +224,11 @@ fn run_stats(show_tagging: bool, show_states: bool) -> anyhow::Result<()> {
} else {
0.0
};
println!("{}: {:.1}%", "Embedding Coverage".white().bold(), embedding_coverage);
println!(
"{}: {:.1}%",
"Embedding Coverage".white().bold(),
embedding_coverage
);
// Tagging distribution (retention levels)
if show_tagging {
@@ -205,9 +239,18 @@ fn run_stats(show_tagging: bool, show_states: bool) -> anyhow::Result<()> {
let total = memories.len();
if total > 0 {
let high = memories.iter().filter(|m| m.retention_strength >= 0.7).count();
let medium = memories.iter().filter(|m| m.retention_strength >= 0.4 && m.retention_strength < 0.7).count();
let low = memories.iter().filter(|m| m.retention_strength < 0.4).count();
let high = memories
.iter()
.filter(|m| m.retention_strength >= 0.7)
.count();
let medium = memories
.iter()
.filter(|m| m.retention_strength >= 0.4 && m.retention_strength < 0.7)
.count();
let low = memories
.iter()
.filter(|m| m.retention_strength < 0.4)
.count();
print_distribution_bar("High (>=70%)", high, total, "green");
print_distribution_bar("Medium (40-70%)", medium, total, "yellow");
@@ -220,7 +263,10 @@ fn run_stats(show_tagging: bool, show_states: bool) -> anyhow::Result<()> {
// State distribution
if show_states {
println!();
println!("{}", "=== Cognitive State Distribution ===".magenta().bold());
println!(
"{}",
"=== Cognitive State Distribution ===".magenta().bold()
);
let memories = storage.get_all_nodes(500, 0)?;
let total = memories.len();
@@ -248,7 +294,9 @@ fn run_stats(show_tagging: bool, show_states: bool) -> anyhow::Result<()> {
}
/// Compute cognitive state distribution for memories
fn compute_state_distribution(memories: &[vestige_core::KnowledgeNode]) -> (usize, usize, usize, usize) {
fn compute_state_distribution(
memories: &[vestige_core::KnowledgeNode],
) -> (usize, usize, usize, usize) {
let mut active = 0;
let mut dormant = 0;
let mut silent = 0;
@@ -297,10 +345,7 @@ fn print_distribution_bar(label: &str, count: usize, total: usize, color: &str)
println!(
" {:15} [{:30}] {:>4} ({:>5.1}%)",
label,
colored_bar,
count,
percentage
label, colored_bar, count, percentage
);
}
@@ -332,8 +377,16 @@ fn run_health() -> anyhow::Result<()> {
println!("{}: {}", "Status".white().bold(), colored_status);
println!("{}: {}", "Total Memories".white(), stats.total_nodes);
println!("{}: {}", "Due for Review".white(), stats.nodes_due_for_review);
println!("{}: {:.1}%", "Average Retention".white(), stats.average_retention * 100.0);
println!(
"{}: {}",
"Due for Review".white(),
stats.nodes_due_for_review
);
println!(
"{}: {:.1}%",
"Average Retention".white(),
stats.average_retention * 100.0
);
// Embedding coverage
let embedding_coverage = if stats.total_nodes > 0 {
@@ -341,15 +394,27 @@ fn run_health() -> anyhow::Result<()> {
} else {
0.0
};
println!("{}: {:.1}%", "Embedding Coverage".white(), embedding_coverage);
println!("{}: {}", "Embedding Service".white(),
if storage.is_embedding_ready() { "Ready".green() } else { "Not Ready".red() });
println!(
"{}: {:.1}%",
"Embedding Coverage".white(),
embedding_coverage
);
println!(
"{}: {}",
"Embedding Service".white(),
if storage.is_embedding_ready() {
"Ready".green()
} else {
"Not Ready".red()
}
);
// Warnings
let mut warnings = Vec::new();
if stats.average_retention < 0.5 && stats.total_nodes > 0 {
warnings.push("Low average retention - consider running consolidation or reviewing memories");
warnings
.push("Low average retention - consider running consolidation or reviewing memories");
}
if stats.nodes_due_for_review > 10 {
@@ -376,7 +441,8 @@ fn run_health() -> anyhow::Result<()> {
let mut recommendations = Vec::new();
if status == "CRITICAL" {
recommendations.push("CRITICAL: Many memories have very low retention. Review important memories.");
recommendations
.push("CRITICAL: Many memories have very low retention. Review important memories.");
}
if stats.nodes_due_for_review > 5 {
@@ -384,7 +450,8 @@ fn run_health() -> anyhow::Result<()> {
}
if stats.nodes_with_embeddings < stats.total_nodes {
recommendations.push("Run 'vestige consolidate' to generate embeddings for better semantic search.");
recommendations
.push("Run 'vestige consolidate' to generate embeddings for better semantic search.");
}
if stats.total_nodes > 100 && stats.average_retention < 0.7 {
@@ -398,8 +465,16 @@ fn run_health() -> anyhow::Result<()> {
println!();
println!("{}", "Recommendations:".cyan().bold());
for rec in &recommendations {
let icon = if rec.starts_with("CRITICAL") { "!".red().bold() } else { ">".cyan() };
let text = if rec.starts_with("CRITICAL") { rec.red().to_string() } else { rec.to_string() };
let icon = if rec.starts_with("CRITICAL") {
"!".red().bold()
} else {
">".cyan()
};
let text = if rec.starts_with("CRITICAL") {
rec.red().to_string()
} else {
rec.to_string()
};
println!(" {} {}", icon, text);
}
@@ -416,11 +491,27 @@ fn run_consolidate() -> anyhow::Result<()> {
let storage = Storage::new(None)?;
let result = storage.run_consolidation()?;
println!("{}: {}", "Nodes Processed".white().bold(), result.nodes_processed);
println!("{}: {}", "Nodes Promoted".white().bold(), result.nodes_promoted);
println!(
"{}: {}",
"Nodes Processed".white().bold(),
result.nodes_processed
);
println!(
"{}: {}",
"Nodes Promoted".white().bold(),
result.nodes_promoted
);
println!("{}: {}", "Nodes Pruned".white().bold(), result.nodes_pruned);
println!("{}: {}", "Decay Applied".white().bold(), result.decay_applied);
println!("{}: {}", "Embeddings Generated".white().bold(), result.embeddings_generated);
println!(
"{}: {}",
"Decay Applied".white().bold(),
result.decay_applied
);
println!(
"{}: {}",
"Embeddings Generated".white().bold(),
result.embeddings_generated
);
println!("{}: {}ms", "Duration".white().bold(), result.duration_ms);
println!();
@@ -523,7 +614,11 @@ fn run_restore(backup_path: PathBuf) -> anyhow::Result<()> {
let stats = storage.get_stats()?;
println!();
println!("{}: {}", "Total Nodes".white(), stats.total_nodes);
println!("{}: {}", "With Embeddings".white(), stats.nodes_with_embeddings);
println!(
"{}: {}",
"With Embeddings".white(),
stats.nodes_with_embeddings
);
Ok(())
}
@@ -581,9 +676,10 @@ fn run_backup(output: PathBuf) -> anyhow::Result<()> {
// Create parent directories if needed
if let Some(parent) = output.parent()
&& !parent.exists() {
std::fs::create_dir_all(parent)?;
}
&& !parent.exists()
{
std::fs::create_dir_all(parent)?;
}
// Copy the database file
println!("Copying database...");
@@ -630,8 +726,9 @@ fn run_export(
// Parse since date if provided
let since_date = match &since {
Some(date_str) => {
let naive = NaiveDate::parse_from_str(date_str, "%Y-%m-%d")
.map_err(|e| anyhow::anyhow!("Invalid date '{}': {}. Use YYYY-MM-DD format.", date_str, e))?;
let naive = NaiveDate::parse_from_str(date_str, "%Y-%m-%d").map_err(|e| {
anyhow::anyhow!("Invalid date '{}': {}. Use YYYY-MM-DD format.", date_str, e)
})?;
Some(
naive
.and_hms_opt(0, 0, 0)
@@ -645,7 +742,12 @@ fn run_export(
// Parse tags filter
let tag_filter: Vec<String> = tags
.as_deref()
.map(|t| t.split(',').map(|s| s.trim().to_string()).filter(|s| !s.is_empty()).collect())
.map(|t| {
t.split(',')
.map(|s| s.trim().to_string())
.filter(|s| !s.is_empty())
.collect()
})
.unwrap_or_default();
let storage = Storage::new(None)?;
@@ -657,9 +759,10 @@ fn run_export(
.filter(|node| {
// Date filter
if let Some(ref since_dt) = since_date
&& node.created_at < *since_dt {
return false;
}
&& node.created_at < *since_dt
{
return false;
}
// Tag filter: node must contain ALL specified tags
if !tag_filter.is_empty() {
for tag in &tag_filter {
@@ -689,9 +792,10 @@ fn run_export(
// Create parent directories if needed
if let Some(parent) = output.parent()
&& !parent.exists() {
std::fs::create_dir_all(parent)?;
}
&& !parent.exists()
{
std::fs::create_dir_all(parent)?;
}
let file = std::fs::File::create(&output)?;
let mut writer = BufWriter::new(file);
@@ -770,7 +874,11 @@ fn run_gc(
})
.collect();
println!("{}: {}", "Min retention threshold".white().bold(), min_retention);
println!(
"{}: {}",
"Min retention threshold".white().bold(),
min_retention
);
if let Some(max_days) = max_age_days {
println!("{}: {} days", "Max age".white().bold(), max_days);
}
@@ -783,7 +891,10 @@ fn run_gc(
if candidates.is_empty() {
println!();
println!("{}", "No memories match the garbage collection criteria.".green());
println!(
"{}",
"No memories match the garbage collection criteria.".green()
);
return Ok(());
}
@@ -853,7 +964,12 @@ fn run_gc(
Ok(true) => deleted += 1,
Ok(false) => errors += 1, // node was already gone
Err(e) => {
eprintln!(" {} Failed to delete {}: {}", "ERR".red(), &node.id[..8], e);
eprintln!(
" {} Failed to delete {}: {}",
"ERR".red(),
&node.id[..8],
e
);
errors += 1;
}
}
@@ -960,7 +1076,10 @@ fn run_ingest(
fn run_dashboard(port: u16, open_browser: bool) -> anyhow::Result<()> {
println!("{}", "=== Vestige Dashboard ===".cyan().bold());
println!();
println!("Starting dashboard at {}...", format!("http://127.0.0.1:{}", port).cyan());
println!(
"Starting dashboard at {}...",
format!("http://127.0.0.1:{}", port).cyan()
);
let storage = Storage::new(None)?;
@@ -1025,8 +1144,19 @@ fn run_serve(port: u16, with_dashboard: bool, dashboard_port: u16) -> anyhow::Re
let dc = Arc::clone(&cognitive);
let dtx = event_tx.clone();
tokio::spawn(async move {
match vestige_mcp::dashboard::start_background_with_event_tx(ds, Some(dc), dtx, dashboard_port).await {
Ok(_) => println!(" {} Dashboard: http://127.0.0.1:{}", ">".cyan(), dashboard_port),
match vestige_mcp::dashboard::start_background_with_event_tx(
ds,
Some(dc),
dtx,
dashboard_port,
)
.await
{
Ok(_) => println!(
" {} Dashboard: http://127.0.0.1:{}",
">".cyan(),
dashboard_port
),
Err(e) => eprintln!(" {} Dashboard failed: {}", "!".yellow(), e),
}
});
@@ -1037,7 +1167,12 @@ fn run_serve(port: u16, with_dashboard: bool, dashboard_port: u16) -> anyhow::Re
.map_err(|e| anyhow::anyhow!("Failed to create auth token: {}", e))?;
let bind = std::env::var("VESTIGE_HTTP_BIND").unwrap_or_else(|_| "127.0.0.1".to_string());
println!(" {} HTTP transport: http://{}:{}/mcp", ">".cyan(), bind, port);
println!(
" {} HTTP transport: http://{}:{}/mcp",
">".cyan(),
bind,
port
);
println!(" {} Auth token: {}...", ">".cyan(), &token[..8]);
println!();
println!("{}", "Press Ctrl+C to stop.".dimmed());


@@ -65,7 +65,12 @@ fn main() -> anyhow::Result<()> {
match storage.ingest(input) {
Ok(_node) => {
success_count += 1;
println!("[{}/{}] OK: {}", i + 1, total, truncate(&memory.content, 60));
println!(
"[{}/{}] OK: {}",
i + 1,
total,
truncate(&memory.content, 60)
);
}
Err(e) => {
println!("[{}/{}] FAIL: {}", i + 1, total, e);
@@ -73,7 +78,10 @@ fn main() -> anyhow::Result<()> {
}
}
println!("\nRestore complete: {}/{} memories restored", success_count, total);
println!(
"\nRestore complete: {}/{} memories restored",
success_count, total
);
// Show stats
let stats = storage.get_stats()?;


@@ -4,24 +4,43 @@
//! Each module is initialized once at startup and shared via Arc<Mutex<>>
//! across all tool invocations.
use vestige_core::neuroscience::predictive_retrieval::PredictiveMemory;
use vestige_core::neuroscience::prospective_memory::{IntentionParser, ProspectiveMemory};
use vestige_core::search::TemporalSearcher;
use vestige_core::{
AccessibilityCalculator,
// Neuroscience modules
ActivationNetwork, SynapticTaggingSystem, HippocampalIndex, ContextMatcher,
AccessibilityCalculator, CompetitionManager, StateUpdateService,
ImportanceSignals, NoveltySignal, ArousalSignal, RewardSignal, AttentionSignal,
EmotionalMemory, LinkType,
ActivationNetwork,
ActivityTracker,
AdaptiveEmbedder,
ArousalSignal,
AttentionSignal,
CompetitionManager,
ConsolidationScheduler,
ContextMatcher,
CrossProjectLearner,
EmotionalMemory,
HippocampalIndex,
ImportanceSignals,
// Advanced modules
ImportanceTracker, ReconsolidationManager, IntentDetector, ActivityTracker,
MemoryDreamer, MemoryChainBuilder, MemoryCompressor, CrossProjectLearner,
AdaptiveEmbedder, SpeculativeRetriever, ConsolidationScheduler,
ImportanceTracker,
IntentDetector,
LinkType,
MemoryChainBuilder,
MemoryCompressor,
MemoryDreamer,
NoveltySignal,
ReconsolidationManager,
// Search modules
Reranker, RerankerConfig,
Reranker,
RerankerConfig,
RewardSignal,
SpeculativeRetriever,
StateUpdateService,
// Storage
Storage,
SynapticTaggingSystem,
};
use vestige_core::search::TemporalSearcher;
use vestige_core::neuroscience::predictive_retrieval::PredictiveMemory;
use vestige_core::neuroscience::prospective_memory::{ProspectiveMemory, IntentionParser};
/// Stateful cognitive engine holding all neuroscience modules.
///
@@ -151,9 +170,9 @@ impl CognitiveEngine {
#[cfg(test)]
mod tests {
use super::*;
use vestige_core::{ConnectionRecord, IngestInput};
use chrono::Utc;
use tempfile::TempDir;
use vestige_core::{ConnectionRecord, IngestInput};
fn create_test_storage() -> (Storage, TempDir) {
let dir = TempDir::new().unwrap();
@@ -162,16 +181,18 @@ mod tests {
}
fn ingest_memory(storage: &Storage, content: &str) -> String {
let result = storage.ingest(IngestInput {
content: content.to_string(),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: vec!["test".to_string()],
valid_from: None,
valid_until: None,
}).unwrap();
let result = storage
.ingest(IngestInput {
content: content.to_string(),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: vec!["test".to_string()],
valid_from: None,
valid_until: None,
})
.unwrap();
result.id
}
@@ -195,15 +216,17 @@ mod tests {
// Save a connection between them
let now = Utc::now();
storage.save_connection(&ConnectionRecord {
source_id: id1.clone(),
target_id: id2.clone(),
strength: 0.85,
link_type: "semantic".to_string(),
created_at: now,
last_activated: now,
activation_count: 1,
}).unwrap();
storage
.save_connection(&ConnectionRecord {
source_id: id1.clone(),
target_id: id2.clone(),
strength: 0.85,
link_type: "semantic".to_string(),
created_at: now,
last_activated: now,
activation_count: 1,
})
.unwrap();
// Hydrate engine
let mut engine = CognitiveEngine::new();
@@ -211,7 +234,11 @@ mod tests {
// Verify activation network has the connection
let assocs = engine.activation_network.get_associations(&id1);
assert!(!assocs.is_empty(), "Hydrated engine should have associations for {}", id1);
assert!(
!assocs.is_empty(),
"Hydrated engine should have associations for {}",
id1
);
assert!(
assocs.iter().any(|a| a.memory_id == id2),
"Should find connection to {}",
@@ -228,29 +255,37 @@ mod tests {
let id3 = ingest_memory(&storage, "Event C was caused by A");
let now = Utc::now();
storage.save_connection(&ConnectionRecord {
source_id: id1.clone(),
target_id: id2.clone(),
strength: 0.7,
link_type: "temporal".to_string(),
created_at: now,
last_activated: now,
activation_count: 1,
}).unwrap();
storage.save_connection(&ConnectionRecord {
source_id: id1.clone(),
target_id: id3.clone(),
strength: 0.9,
link_type: "causal".to_string(),
created_at: now,
last_activated: now,
activation_count: 1,
}).unwrap();
storage
.save_connection(&ConnectionRecord {
source_id: id1.clone(),
target_id: id2.clone(),
strength: 0.7,
link_type: "temporal".to_string(),
created_at: now,
last_activated: now,
activation_count: 1,
})
.unwrap();
storage
.save_connection(&ConnectionRecord {
source_id: id1.clone(),
target_id: id3.clone(),
strength: 0.9,
link_type: "causal".to_string(),
created_at: now,
last_activated: now,
activation_count: 1,
})
.unwrap();
let mut engine = CognitiveEngine::new();
engine.hydrate(&storage);
let assocs = engine.activation_network.get_associations(&id1);
assert!(assocs.len() >= 2, "Should have at least 2 associations, got {}", assocs.len());
assert!(
assocs.len() >= 2,
"Should have at least 2 associations, got {}",
assocs.len()
);
}
}


@@ -38,6 +38,24 @@ pub enum VestigeEvent {
new_retention: f64,
timestamp: DateTime<Utc>,
},
// v2.0.5: Active forgetting — top-down suppression (Anderson 2025 + Davis Rac1)
MemorySuppressed {
id: String,
suppression_count: i32,
estimated_cascade: usize,
reversible_until: DateTime<Utc>,
timestamp: DateTime<Utc>,
},
MemoryUnsuppressed {
id: String,
remaining_count: i32,
timestamp: DateTime<Utc>,
},
Rac1CascadeSwept {
seeds: usize,
neighbors_affected: usize,
timestamp: DateTime<Utc>,
},
// -- Search --
SearchPerformed {
@@ -119,6 +137,9 @@ pub enum VestigeEvent {
uptime_secs: u64,
memory_count: usize,
avg_retention: f64,
/// v2.0.5: memories with suppression_count > 0 (actively forgetting)
#[serde(default)]
suppressed_count: usize,
timestamp: DateTime<Utc>,
},
}


@@ -38,7 +38,8 @@ pub async fn list_memories(
if let Some(query) = params.q.as_ref().filter(|q| !q.trim().is_empty()) {
// Use hybrid search
let results = state.storage
let results = state
.storage
.hybrid_search(query, limit, 0.3, 0.7)
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
@@ -76,7 +77,8 @@ pub async fn list_memories(
}
// No search query — list all memories
let mut nodes = state.storage
let mut nodes = state
.storage
.get_all_nodes(limit, offset)
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
@@ -121,7 +123,8 @@ pub async fn get_memory(
State(state): State<AppState>,
Path(id): Path<String>,
) -> Result<Json<Value>, StatusCode> {
let node = state.storage
let node = state
.storage
.get_node(&id)
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?
.ok_or(StatusCode::NOT_FOUND)?;
@@ -152,7 +155,8 @@ pub async fn delete_memory(
State(state): State<AppState>,
Path(id): Path<String>,
) -> Result<Json<Value>, StatusCode> {
let deleted = state.storage
let deleted = state
.storage
.delete_node(&id)
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
@@ -172,7 +176,8 @@ pub async fn promote_memory(
State(state): State<AppState>,
Path(id): Path<String>,
) -> Result<Json<Value>, StatusCode> {
let node = state.storage
let node = state
.storage
.promote_memory(&id)
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
@@ -194,7 +199,8 @@ pub async fn demote_memory(
State(state): State<AppState>,
Path(id): Path<String>,
) -> Result<Json<Value>, StatusCode> {
let node = state.storage
let node = state
.storage
.demote_memory(&id)
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
@@ -212,10 +218,9 @@ pub async fn demote_memory(
}
/// Get system stats
pub async fn get_stats(
State(state): State<AppState>,
) -> Result<Json<Value>, StatusCode> {
let stats = state.storage
pub async fn get_stats(State(state): State<AppState>) -> Result<Json<Value>, StatusCode> {
let stats = state
.storage
.get_stats()
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
@@ -254,12 +259,14 @@ pub async fn get_timeline(
let limit = params.limit.unwrap_or(200).clamp(1, 500);
let start = Utc::now() - Duration::days(days);
let nodes = state.storage
let nodes = state
.storage
.query_time_range(Some(start), Some(Utc::now()), limit)
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
// Group by day
let mut by_day: std::collections::BTreeMap<String, Vec<Value>> = std::collections::BTreeMap::new();
let mut by_day: std::collections::BTreeMap<String, Vec<Value>> =
std::collections::BTreeMap::new();
for node in &nodes {
let date = node.created_at.format("%Y-%m-%d").to_string();
let content_preview: String = {
@@ -299,10 +306,9 @@ pub async fn get_timeline(
}
/// Health check
pub async fn health_check(
State(state): State<AppState>,
) -> Result<Json<Value>, StatusCode> {
let stats = state.storage
pub async fn health_check(State(state): State<AppState>) -> Result<Json<Value>, StatusCode> {
let stats = state
.storage
.get_stats()
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
@@ -353,32 +359,38 @@ pub async fn get_graph(
let center_id = if let Some(ref id) = params.center_id {
id.clone()
} else if let Some(ref query) = params.query {
let results = state.storage
let results = state
.storage
.search(query, 1)
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
results.first()
results
.first()
.map(|n| n.id.clone())
.ok_or(StatusCode::NOT_FOUND)?
} else {
// Default: most connected memory (for a rich initial graph)
let most_connected = state.storage
let most_connected = state
.storage
.get_most_connected_memory()
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
if let Some(id) = most_connected {
id
} else {
// Fallback: most recent memory
let recent = state.storage
let recent = state
.storage
.get_all_nodes(1, 0)
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
recent.first()
recent
.first()
.map(|n| n.id.clone())
.ok_or(StatusCode::NOT_FOUND)?
}
};
// Get subgraph
let (nodes, edges) = state.storage
let (nodes, edges) = state
.storage
.get_memory_subgraph(&center_id, depth, max_nodes)
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
@@ -387,7 +399,8 @@ pub async fn get_graph(
}
// Build nodes JSON with timestamps for recency calculation
let nodes_json: Vec<Value> = nodes.iter()
let nodes_json: Vec<Value> = nodes
.iter()
.map(|n| {
let label = if n.content.chars().count() > 80 {
format!("{}...", n.content.chars().take(77).collect::<String>())
@@ -407,7 +420,8 @@
})
.collect();
let edges_json: Vec<Value> = edges.iter()
let edges_json: Vec<Value> = edges
.iter()
.map(|e| {
serde_json::json!({
"source": e.source_id,
@@ -498,10 +512,11 @@ pub async fn search_memories(
// ============================================================================
/// Trigger a dream cycle via CognitiveEngine
pub async fn trigger_dream(
State(state): State<AppState>,
) -> Result<Json<Value>, StatusCode> {
let cognitive = state.cognitive.as_ref().ok_or(StatusCode::SERVICE_UNAVAILABLE)?;
pub async fn trigger_dream(State(state): State<AppState>) -> Result<Json<Value>, StatusCode> {
let cognitive = state
.cognitive
.as_ref()
.ok_or(StatusCode::SERVICE_UNAVAILABLE)?;
let start = std::time::Instant::now();
let memory_count: usize = 50;
@@ -715,9 +730,7 @@
}
/// Predict which memories will be needed
pub async fn predict_memories(
State(state): State<AppState>,
) -> Result<Json<Value>, StatusCode> {
pub async fn predict_memories(State(state): State<AppState>) -> Result<Json<Value>, StatusCode> {
// Get recent memories as predictions based on activity
let recent = state
.storage
@@ -756,7 +769,9 @@
if let Some(ref cognitive) = state.cognitive {
let context = vestige_core::ImportanceContext::current();
let cog = cognitive.lock().await;
let score = cog.importance_signals.compute_importance(&req.content, &context);
let score = cog
.importance_signals
.compute_importance(&req.content, &context);
drop(cog);
let composite = score.composite;
@@ -789,7 +804,11 @@
// Fallback: basic heuristic scoring
let word_count = req.content.split_whitespace().count();
let has_code = req.content.contains("```") || req.content.contains("fn ");
let composite = if has_code { 0.7 } else { (word_count as f64 / 100.0).min(0.8) };
let composite = if has_code {
0.7
} else {
(word_count as f64 / 100.0).min(0.8)
};
Ok(Json(serde_json::json!({
"composite": composite,
@@ -907,17 +926,35 @@ pub async fn list_intentions(
let intentions = if status_filter == "all" {
// Get all statuses
let mut all = state.storage.get_active_intentions()
.unwrap_or_default();
all.extend(state.storage.get_intentions_by_status("fulfilled").unwrap_or_default());
all.extend(state.storage.get_intentions_by_status("cancelled").unwrap_or_default());
all.extend(state.storage.get_intentions_by_status("snoozed").unwrap_or_default());
let mut all = state.storage.get_active_intentions().unwrap_or_default();
all.extend(
state
.storage
.get_intentions_by_status("fulfilled")
.unwrap_or_default(),
);
all.extend(
state
.storage
.get_intentions_by_status("cancelled")
.unwrap_or_default(),
);
all.extend(
state
.storage
.get_intentions_by_status("snoozed")
.unwrap_or_default(),
);
all
} else if status_filter == "active" {
state.storage.get_active_intentions()
state
.storage
.get_active_intentions()
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?
} else {
state.storage.get_intentions_by_status(&status_filter)
state
.storage
.get_intentions_by_status(&status_filter)
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?
};


@@ -11,8 +11,8 @@ pub mod state;
pub mod static_files;
pub mod websocket;
use axum::routing::{delete, get, post};
use axum::Router;
use axum::routing::{delete, get, post};
use std::net::SocketAddr;
use std::sync::Arc;
use tokio::sync::Mutex;
@@ -47,7 +47,6 @@
}
fn build_router_inner(state: AppState, port: u16) -> (Router, AppState) {
#[allow(unused_mut)]
let mut origins = vec![
format!("http://127.0.0.1:{}", port)
@@ -61,8 +60,16 @@ fn build_router_inner(state: AppState, port: u16) -> (Router, AppState) {
// SvelteKit dev server — only in debug builds
#[cfg(debug_assertions)]
{
origins.push("http://localhost:5173".parse::<axum::http::HeaderValue>().expect("valid origin"));
origins.push("http://127.0.0.1:5173".parse::<axum::http::HeaderValue>().expect("valid origin"));
origins.push(
"http://localhost:5173"
.parse::<axum::http::HeaderValue>()
.expect("valid origin"),
);
origins.push(
"http://127.0.0.1:5173"
.parse::<axum::http::HeaderValue>()
.expect("valid origin"),
);
}
let cors = CorsLayer::new()
@@ -116,7 +123,10 @@ fn build_router_inner(state: AppState, port: u16) -> (Router, AppState) {
let router = Router::new()
// SvelteKit Dashboard v2.0 (embedded static build)
.route("/dashboard", get(static_files::serve_dashboard_spa))
.route("/dashboard/{*path}", get(static_files::serve_dashboard_asset))
.route(
"/dashboard/{*path}",
get(static_files::serve_dashboard_asset),
)
// Legacy embedded HTML (keep for backward compat)
.route("/", get(handlers::serve_dashboard))
.route("/graph", get(handlers::serve_graph))
@@ -143,7 +153,10 @@ fn build_router_inner(state: AppState, port: u16) -> (Router, AppState) {
.route("/api/predict", post(handlers::predict_memories))
.route("/api/importance", post(handlers::score_importance))
.route("/api/consolidate", post(handlers::trigger_consolidation))
.route("/api/retention-distribution", get(handlers::retention_distribution))
.route(
"/api/retention-distribution",
get(handlers::retention_distribution),
)
// Intentions (v2.0)
.route("/api/intentions", get(handlers::list_intentions))
.layer(


@@ -2,11 +2,11 @@
use std::sync::Arc;
use std::time::Instant;
use tokio::sync::{broadcast, Mutex};
use tokio::sync::{Mutex, broadcast};
use vestige_core::Storage;
use crate::cognitive::CognitiveEngine;
use super::events::VestigeEvent;
use crate::cognitive::CognitiveEngine;
/// Broadcast channel capacity — how many events can buffer before old ones drop.
const EVENT_CHANNEL_CAPACITY: usize = 1024;
@@ -22,10 +22,7 @@ pub struct AppState {
impl AppState {
/// Create a new AppState with event broadcasting.
pub fn new(
storage: Arc<Storage>,
cognitive: Option<Arc<Mutex<CognitiveEngine>>>,
) -> Self {
pub fn new(storage: Arc<Storage>, cognitive: Option<Arc<Mutex<CognitiveEngine>>>) -> Self {
let (event_tx, _) = broadcast::channel(EVENT_CHANNEL_CAPACITY);
Self {
storage,
@@ -4,9 +4,9 @@
//! using `include_dir!`. This serves it at `/dashboard/` prefix.
use axum::extract::Path;
use axum::http::{header, StatusCode};
use axum::http::{StatusCode, header};
use axum::response::{Html, IntoResponse, Response};
use include_dir::{include_dir, Dir};
use include_dir::{Dir, include_dir};
/// Embed the entire SvelteKit build output into the binary.
/// Build with: cd apps/dashboard && pnpm build
@@ -16,11 +16,11 @@ static DASHBOARD_DIR: Dir<'_> = include_dir!("$CARGO_MANIFEST_DIR/../../apps/das
/// Serve the SvelteKit dashboard index
pub async fn serve_dashboard_spa() -> impl IntoResponse {
match DASHBOARD_DIR.get_file("index.html") {
Some(file) => Html(
String::from_utf8_lossy(file.contents()).to_string(),
Some(file) => Html(String::from_utf8_lossy(file.contents()).to_string()).into_response(),
None => (
StatusCode::NOT_FOUND,
"Dashboard not built. Run: cd apps/dashboard && pnpm build",
)
.into_response(),
None => (StatusCode::NOT_FOUND, "Dashboard not built. Run: cd apps/dashboard && pnpm build")
.into_response(),
}
}
@@ -3,8 +3,8 @@
//! Clients connect to `/ws` and receive all VestigeEvents as JSON.
//! Also sends heartbeats every 5 seconds with system stats.
use axum::extract::ws::{Message, WebSocket, WebSocketUpgrade};
use axum::extract::State;
use axum::extract::ws::{Message, WebSocket, WebSocketUpgrade};
use axum::http::{HeaderMap, StatusCode};
use axum::response::IntoResponse;
use chrono::Utc;
@@ -26,10 +26,11 @@ pub async fn ws_handler(
// Non-browser clients (curl, wscat) won't have Origin — allowed since localhost-only.
match headers.get("origin").and_then(|v| v.to_str().ok()) {
Some(origin) => {
let allowed = origin.starts_with("http://127.0.0.1:")
|| origin.starts_with("http://localhost:");
let allowed =
origin.starts_with("http://127.0.0.1:") || origin.starts_with("http://localhost:");
#[cfg(debug_assertions)]
let allowed = allowed || origin == "http://localhost:5173" || origin == "http://127.0.0.1:5173";
let allowed =
allowed || origin == "http://localhost:5173" || origin == "http://127.0.0.1:5173";
if !allowed {
warn!("Rejected WebSocket connection from origin: {}", origin);
return StatusCode::FORBIDDEN.into_response();
@@ -85,10 +86,14 @@ async fn handle_socket(socket: WebSocket, state: AppState) {
.map(|s| (s.total_nodes as usize, s.average_retention))
.unwrap_or((0, 0.0));
// v2.0.5: live count of memories being actively forgotten
let suppressed_count = heartbeat_state.storage.count_suppressed().unwrap_or(0);
let event = VestigeEvent::Heartbeat {
uptime_secs: uptime,
memory_count,
avg_retention,
suppressed_count,
timestamp: Utc::now(),
};
@@ -35,7 +35,7 @@ use std::io;
use std::path::PathBuf;
use std::sync::Arc;
use tokio::sync::Mutex;
use tracing::{error, info, warn, Level};
use tracing::{Level, error, info, warn};
use tracing_subscriber::EnvFilter;
// Use vestige-core for the cognitive science engine
@@ -82,8 +82,12 @@ fn parse_args() -> Config {
println!(" --http-port <PORT> HTTP transport port (default: 3928)");
println!();
println!("ENVIRONMENT:");
println!(" RUST_LOG Log level filter (e.g., debug, info, warn, error)");
println!(" VESTIGE_AUTH_TOKEN Override the bearer token for HTTP transport");
println!(
" RUST_LOG Log level filter (e.g., debug, info, warn, error)"
);
println!(
" VESTIGE_AUTH_TOKEN Override the bearer token for HTTP transport"
);
println!(" VESTIGE_HTTP_PORT HTTP transport port (default: 3928)");
println!(" VESTIGE_DASHBOARD_ENABLED Enable dashboard (default: disabled)");
println!(" VESTIGE_DASHBOARD_PORT Dashboard port (default: 3927)");
@@ -153,7 +157,11 @@ fn parse_args() -> Config {
i += 1;
}
Config { data_dir, http_port, dashboard_enabled }
Config {
data_dir,
http_port,
dashboard_enabled,
}
}
#[tokio::main]
@@ -163,16 +171,16 @@ async fn main() {
// Initialize logging to stderr (stdout is for JSON-RPC)
tracing_subscriber::fmt()
.with_env_filter(
EnvFilter::from_default_env()
.add_directive(Level::INFO.into())
)
.with_env_filter(EnvFilter::from_default_env().add_directive(Level::INFO.into()))
.with_writer(io::stderr)
.with_target(false)
.with_ansi(false)
.init();
info!("Vestige MCP Server v{} starting...", env!("CARGO_PKG_VERSION"));
info!(
"Vestige MCP Server v{} starting...",
env!("CARGO_PKG_VERSION")
);
// Initialize storage with optional custom data directory
let storage = match Storage::new(config.data_dir) {
@@ -185,7 +193,9 @@ async fn main() {
if let Err(e) = s.init_embeddings() {
error!("Failed to initialize embedding service: {}", e);
error!("Smart ingest will fall back to regular ingest without deduplication");
error!("Hint: Check FASTEMBED_CACHE_PATH or ensure ~/.cache/vestige/fastembed is writable");
error!(
"Hint: Check FASTEMBED_CACHE_PATH or ensure ~/.cache/vestige/fastembed is writable"
);
} else {
info!("Embedding service initialized successfully");
}
@@ -233,7 +243,10 @@ async fn main() {
true
}
Err(e) => {
warn!("Could not read consolidation history: {} — running anyway", e);
warn!(
"Could not read consolidation history: {} — running anyway",
e
);
true
}
};
@@ -255,6 +268,23 @@ async fn main() {
warn!("Periodic auto-consolidation failed: {}", e);
}
}
// v2.0.5: Rac1 cascade sweep — walk recently-suppressed
// memories and fade their co-activated neighbors
// (Cervantes-Sandoval & Davis 2020, PMC7477079).
match storage_clone.run_rac1_cascade_sweep() {
Ok((seeds, affected)) if seeds > 0 || affected > 0 => {
info!(
suppressed_seeds = seeds,
neighbors_affected = affected,
"Rac1 cascade sweep complete"
);
}
Ok(_) => {}
Err(e) => {
warn!("Rac1 cascade sweep failed: {}", e);
}
}
}
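The suppression arithmetic this sweep builds on — per the release notes, a 0.15-per-suppression penalty saturating at 80% — can be sketched as below. `sif_penalty` and `apply_sif` are illustrative names, not functions in this diff, and whether the penalty is subtracted or applied multiplicatively is an implementation detail not visible in this hunk; this sketch subtracts and floors at zero.

```rust
// Hypothetical sketch of the suppression-induced-forgetting (SIF) penalty:
// 0.15 per `suppress` call, saturating at 0.80 (the 80% cap).
fn sif_penalty(suppression_count: u32) -> f64 {
    (0.15 * suppression_count as f64).min(0.80)
}

// Applied during hybrid search: the penalty stacks on top of passive FSRS
// decay, so a heavily suppressed memory still exists but scores near zero.
fn apply_sif(retrieval_score: f64, suppression_count: u32) -> f64 {
    (retrieval_score - sif_penalty(suppression_count)).max(0.0)
}
```

Because the penalty compounds with `suppression_count` rather than being a one-shot demotion, repeated `suppress` calls converge on — but never exceed — the 80% floor.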
// Sleep until next check
@@ -273,7 +303,8 @@ async fn main() {
info!("CognitiveEngine initialized and hydrated");
// Create shared event broadcast channel for dashboard <-> MCP tool events
let (event_tx, _) = tokio::sync::broadcast::channel::<vestige_mcp::dashboard::events::VestigeEvent>(1024);
let (event_tx, _) =
tokio::sync::broadcast::channel::<vestige_mcp::dashboard::events::VestigeEvent>(1024);
// Spawn dashboard HTTP server alongside MCP server (now with CognitiveEngine access)
if config.dashboard_enabled {
@@ -290,7 +321,9 @@ async fn main() {
Some(dashboard_cognitive),
dashboard_event_tx,
dashboard_port,
).await {
)
.await
{
Ok(_state) => {
info!("Dashboard started with WebSocket + CognitiveEngine + shared event bus");
}
@@ -312,7 +345,8 @@ async fn main() {
match protocol::auth::get_or_create_auth_token() {
Ok(token) => {
let bind = std::env::var("VESTIGE_HTTP_BIND").unwrap_or_else(|_| "127.0.0.1".to_string());
let bind =
std::env::var("VESTIGE_HTTP_BIND").unwrap_or_else(|_| "127.0.0.1".to_string());
eprintln!("Vestige HTTP transport: http://{}:{}/mcp", bind, http_port);
eprintln!("Auth token: {}...", &token[..token.len().min(8)]);
tokio::spawn(async move {
@@ -330,7 +364,10 @@ async fn main() {
});
}
Err(e) => {
warn!("Could not create auth token, HTTP transport disabled: {}", e);
warn!(
"Could not create auth token, HTTP transport disabled: {}",
e
);
}
}
}
@@ -17,17 +17,17 @@ use axum::response::IntoResponse;
use axum::routing::{delete, post};
use axum::{Json, Router};
use subtle::ConstantTimeEq;
use tokio::sync::{broadcast, Mutex, RwLock};
use tokio::sync::{Mutex, RwLock, broadcast};
use tower::ServiceBuilder;
use tower::limit::ConcurrencyLimitLayer;
use tower_http::cors::CorsLayer;
use tracing::{info, warn};
use crate::cognitive::CognitiveEngine;
use crate::dashboard::events::VestigeEvent;
use crate::protocol::types::JsonRpcRequest;
use crate::server::McpServer;
use vestige_core::Storage;
use crate::dashboard::events::VestigeEvent;
/// Maximum concurrent sessions.
const MAX_SESSIONS: usize = 100;
@@ -95,7 +95,11 @@ pub async fn start_http_transport(
});
let removed = before - map.len();
if removed > 0 {
info!("Session reaper: removed {} idle sessions ({} active)", removed, map.len());
info!(
"Session reaper: removed {} idle sessions ({} active)",
removed,
map.len()
);
}
}
});
@@ -119,8 +123,15 @@ pub async fn start_http_transport(
.filter_map(|s| s.parse().ok())
.collect::<Vec<_>>(),
)
.allow_methods([axum::http::Method::POST, axum::http::Method::DELETE, axum::http::Method::OPTIONS])
.allow_headers([axum::http::header::CONTENT_TYPE, axum::http::header::AUTHORIZATION])
.allow_methods([
axum::http::Method::POST,
axum::http::Method::DELETE,
axum::http::Method::OPTIONS,
])
.allow_headers([
axum::http::header::CONTENT_TYPE,
axum::http::header::AUTHORIZATION,
]),
),
)
.with_state(state);
@@ -156,9 +167,10 @@ fn validate_auth(headers: &HeaderMap, expected: &str) -> Result<(), (StatusCode,
.and_then(|v| v.to_str().ok())
.ok_or((StatusCode::UNAUTHORIZED, "Missing Authorization header"))?;
let token = header
.strip_prefix("Bearer ")
.ok_or((StatusCode::UNAUTHORIZED, "Invalid Authorization scheme (expected Bearer)"))?;
let token = header.strip_prefix("Bearer ").ok_or((
StatusCode::UNAUTHORIZED,
"Invalid Authorization scheme (expected Bearer)",
))?;
// Constant-time comparison: prevents timing side-channel attacks.
// We first check lengths match (length itself is not secret since UUIDs
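The comparison strategy described in this comment is easy to demonstrate standalone. The server delegates to the `subtle` crate's `ConstantTimeEq`; the hand-rolled `tokens_match` below is a hypothetical, dependency-free sketch of the same idea — XOR-accumulate every byte so timing never reveals where the first mismatch sits.

```rust
// Hand-rolled constant-time equality sketch; the real code uses
// subtle::ConstantTimeEq rather than rolling its own.
fn tokens_match(provided: &str, expected: &str) -> bool {
    // Length is checked up front — length itself is not secret for
    // UUID-style tokens, only the byte contents are.
    if provided.len() != expected.len() {
        return false;
    }
    let mut diff = 0u8;
    for (a, b) in provided.bytes().zip(expected.bytes()) {
        diff |= a ^ b; // accumulate mismatches; never exit early
    }
    diff == 0
}
```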
@@ -209,11 +221,7 @@ async fn post_mcp(
// Take write lock immediately to avoid TOCTOU race on MAX_SESSIONS check.
let mut sessions = state.sessions.write().await;
if sessions.len() >= MAX_SESSIONS {
return (
StatusCode::SERVICE_UNAVAILABLE,
"Too many active sessions",
)
.into_response();
return (StatusCode::SERVICE_UNAVAILABLE, "Too many active sessions").into_response();
}
let server = McpServer::new_with_events(
@@ -242,13 +250,23 @@ async fn post_mcp(
match response {
Some(resp) => {
let mut resp_headers = HeaderMap::new();
resp_headers.insert("mcp-session-id", session_id.parse().unwrap_or_else(|_| axum::http::HeaderValue::from_static("invalid")));
resp_headers.insert(
"mcp-session-id",
session_id
.parse()
.unwrap_or_else(|_| axum::http::HeaderValue::from_static("invalid")),
);
(StatusCode::OK, resp_headers, Json(resp)).into_response()
}
None => {
// Notifications return 202
let mut resp_headers = HeaderMap::new();
resp_headers.insert("mcp-session-id", session_id.parse().unwrap_or_else(|_| axum::http::HeaderValue::from_static("invalid")));
resp_headers.insert(
"mcp-session-id",
session_id
.parse()
.unwrap_or_else(|_| axum::http::HeaderValue::from_static("invalid")),
);
(StatusCode::ACCEPTED, resp_headers).into_response()
}
}
@@ -273,11 +291,7 @@ async fn post_mcp(
let session = match session {
Some(s) => s,
None => {
return (
StatusCode::NOT_FOUND,
"Session not found or expired",
)
.into_response();
return (StatusCode::NOT_FOUND, "Session not found or expired").into_response();
}
};
@@ -288,7 +302,12 @@ async fn post_mcp(
};
let mut resp_headers = HeaderMap::new();
resp_headers.insert("mcp-session-id", session_id.parse().unwrap_or_else(|_| axum::http::HeaderValue::from_static("invalid")));
resp_headers.insert(
"mcp-session-id",
session_id
.parse()
.unwrap_or_else(|_| axum::http::HeaderValue::from_static("invalid")),
);
match response {
Some(resp) => (StatusCode::OK, resp_headers, Json(resp)).into_response(),
@@ -308,7 +327,13 @@ async fn delete_mcp(
let session_id = match session_id_from_headers(&headers) {
Some(id) => id,
None => return (StatusCode::BAD_REQUEST, "Missing or invalid Mcp-Session-Id header").into_response(),
None => {
return (
StatusCode::BAD_REQUEST,
"Missing or invalid Mcp-Session-Id header",
)
.into_response();
}
};
let mut sessions = state.sessions.write().await;
@@ -25,7 +25,6 @@ pub struct JsonRpcRequest {
pub params: Option<Value>,
}
/// JSON-RPC Response
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct JsonRpcResponse {
@@ -129,7 +128,10 @@ impl JsonRpcError {
#[allow(dead_code)] // Reserved for future resource handling
pub fn resource_not_found(uri: &str) -> Self {
Self::new(ErrorCode::ResourceNotFound, &format!("Resource not found: {}", uri))
Self::new(
ErrorCode::ResourceNotFound,
&format!("Resource not found: {}", uri),
)
}
}
@@ -35,15 +35,10 @@ pub async fn read(storage: &Arc<Storage>, uri: &str) -> Result<String, String> {
fn parse_query_param(query: Option<&str>, key: &str, default: i32) -> i32 {
query
.and_then(|q| {
q.split('&')
.find_map(|pair| {
let (k, v) = pair.split_once('=')?;
if k == key {
v.parse().ok()
} else {
None
}
})
q.split('&').find_map(|pair| {
let (k, v) = pair.split_once('=')?;
if k == key { v.parse().ok() } else { None }
})
})
.unwrap_or(default)
.clamp(1, 100)
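Standalone, the reformatted helper above parses, defaults, and clamps like this (same logic, copied out for illustration):

```rust
// Extract an integer query parameter, falling back to `default`,
// and keep the result in a sane 1..=100 window.
fn parse_query_param(query: Option<&str>, key: &str, default: i32) -> i32 {
    query
        .and_then(|q| {
            q.split('&').find_map(|pair| {
                let (k, v) = pair.split_once('=')?;
                if k == key { v.parse().ok() } else { None }
            })
        })
        .unwrap_or(default)
        .clamp(1, 100)
}
```

Note that an unparseable value (e.g. `limit=abc`) falls through to the default rather than erroring, and out-of-range values are clamped rather than rejected.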
@@ -228,7 +223,10 @@ async fn read_intentions(storage: &Arc<Storage>) -> Result<String, String> {
})
.collect();
let overdue_count = items.iter().filter(|i| i["isOverdue"].as_bool().unwrap_or(false)).count();
let overdue_count = items
.iter()
.filter(|i| i["isOverdue"].as_bool().unwrap_or(false))
.count();
let result = serde_json::json!({
"total": intentions.len(),
@@ -241,7 +239,9 @@
}
async fn read_triggered_intentions(storage: &Arc<Storage>) -> Result<String, String> {
let overdue = storage.get_overdue_intentions().map_err(|e| e.to_string())?;
let overdue = storage
.get_overdue_intentions()
.map_err(|e| e.to_string())?;
let now = chrono::Utc::now();
let items: Vec<serde_json::Value> = overdue
@@ -289,7 +289,10 @@ async fn read_insights(storage: &Arc<Storage>) -> Result<String, String> {
let insights = storage.get_insights(50).map_err(|e| e.to_string())?;
let pending: Vec<_> = insights.iter().filter(|i| i.feedback.is_none()).collect();
let accepted: Vec<_> = insights.iter().filter(|i| i.feedback.as_deref() == Some("accepted")).collect();
let accepted: Vec<_> = insights
.iter()
.filter(|i| i.feedback.as_deref() == Some("accepted"))
.collect();
let items: Vec<serde_json::Value> = insights
.iter()
@@ -319,8 +322,12 @@
}
async fn read_consolidation_log(storage: &Arc<Storage>) -> Result<String, String> {
let history = storage.get_consolidation_history(20).map_err(|e| e.to_string())?;
let last_run = storage.get_last_consolidation().map_err(|e| e.to_string())?;
let history = storage
.get_consolidation_history(20)
.map_err(|e| e.to_string())?;
let last_run = storage
.get_last_consolidation()
.map_err(|e| e.to_string())?;
let items: Vec<serde_json::Value> = history
.iter()

(File diff suppressed because it is too large.)
@@ -54,10 +54,7 @@ struct ChangelogArgs {
}
/// Execute memory_changelog tool
pub async fn execute(
storage: &Arc<Storage>,
args: Option<Value>,
) -> Result<Value, String> {
pub async fn execute(storage: &Arc<Storage>, args: Option<Value>) -> Result<Value, String> {
let args: ChangelogArgs = match args {
Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
None => ChangelogArgs {
@@ -80,11 +77,7 @@ pub async fn execute(
}
/// Per-memory changelog: state transition audit trail
fn execute_per_memory(
storage: &Storage,
memory_id: &str,
limit: i32,
) -> Result<Value, String> {
fn execute_per_memory(storage: &Storage, memory_id: &str, limit: i32) -> Result<Value, String> {
// Validate UUID format
Uuid::parse_str(memory_id)
.map_err(|_| format!("Invalid memory_id '{}'. Must be a valid UUID.", memory_id))?;
@@ -126,10 +119,7 @@ fn execute_per_memory(
}
/// System-wide changelog: consolidations + recent state transitions
fn execute_system_wide(
storage: &Storage,
limit: i32,
) -> Result<Value, String> {
fn execute_system_wide(storage: &Storage, limit: i32) -> Result<Value, String> {
// Get consolidation history
let consolidations = storage
.get_consolidation_history(limit)
@@ -141,9 +131,7 @@ fn execute_system_wide(
.map_err(|e| e.to_string())?;
// Get dream history (Bug #9 fix — dreams were invisible to audit trail)
let dreams = storage
.get_dream_history(limit)
.unwrap_or_default();
let dreams = storage.get_dream_history(limit).unwrap_or_default();
// Build unified event list
let mut events: Vec<(DateTime<Utc>, Value)> = Vec::new();
@@ -296,8 +284,7 @@ mod tests {
#[tokio::test]
async fn test_changelog_per_memory_nonexistent() {
let (storage, _dir) = test_storage().await;
let args =
serde_json::json!({ "memory_id": "00000000-0000-0000-0000-000000000000" });
let args = serde_json::json!({ "memory_id": "00000000-0000-0000-0000-000000000000" });
let result = execute(&storage, Some(args)).await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("not found"));
@@ -7,7 +7,6 @@ use serde::Deserialize;
use serde_json::Value;
use std::sync::Arc;
use vestige_core::{IngestInput, Storage};
/// Input schema for session_checkpoint tool
@@ -63,10 +62,7 @@ struct CheckpointItem {
source: Option<String>,
}
pub async fn execute(
storage: &Arc<Storage>,
args: Option<Value>,
) -> Result<Value, String> {
pub async fn execute(storage: &Arc<Storage>, args: Option<Value>) -> Result<Value, String> {
let args: CheckpointArgs = match args {
Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
None => return Err("Missing arguments".to_string()),
@@ -7,7 +7,6 @@ use serde::Deserialize;
use serde_json::Value;
use std::sync::Arc;
use vestige_core::{IngestInput, Storage};
/// Input schema for remember_pattern tool
@@ -114,10 +113,7 @@ struct ContextArgs {
limit: Option<i32>,
}
pub async fn execute_pattern(
storage: &Arc<Storage>,
args: Option<Value>,
) -> Result<Value, String> {
pub async fn execute_pattern(storage: &Arc<Storage>, args: Option<Value>) -> Result<Value, String> {
let args: PatternArgs = match args {
Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
None => return Err("Missing arguments".to_string()),
@@ -206,7 +202,11 @@ pub async fn execute_decision(
}
// Build tags
let mut tags = vec!["decision".to_string(), "architecture".to_string(), "codebase".to_string()];
let mut tags = vec![
"decision".to_string(),
"architecture".to_string(),
"codebase".to_string(),
];
if let Some(ref codebase) = args.codebase {
tags.push(format!("codebase:{}", codebase));
}
@@ -231,10 +231,7 @@ pub async fn execute_decision(
}))
}
pub async fn execute_context(
storage: &Arc<Storage>,
args: Option<Value>,
) -> Result<Value, String> {
pub async fn execute_context(storage: &Arc<Storage>, args: Option<Value>) -> Result<Value, String> {
let args: ContextArgs = args
.map(serde_json::from_value)
.transpose()
@@ -161,7 +161,8 @@ async fn execute_remember_pattern(
// ====================================================================
if let Ok(cog) = cognitive.try_lock() {
let codebase_name = args.codebase.as_deref().unwrap_or("default");
cog.cross_project.record_project_memory(&node_id, codebase_name, None);
cog.cross_project
.record_project_memory(&node_id, codebase_name, None);
// Also index in hippocampal index for fast retrieval
let _ = cog.hippocampal_index.index_memory(
@@ -256,7 +257,8 @@ async fn execute_remember_decision(
// ====================================================================
if let Ok(cog) = cognitive.try_lock() {
let codebase_name = args.codebase.as_deref().unwrap_or("default");
cog.cross_project.record_project_memory(&node_id, codebase_name, None);
cog.cross_project
.record_project_memory(&node_id, codebase_name, None);
// Index in hippocampal index
let _ = cog.hippocampal_index.index_memory(
@@ -285,10 +287,7 @@ async fn execute_get_context(
let limit = args.limit.unwrap_or(10).clamp(1, 50);
// Build tag filter for codebase
let tag_filter = args
.codebase
.as_ref()
.map(|cb| format!("codebase:{}", cb));
let tag_filter = args.codebase.as_ref().map(|cb| format!("codebase:{}", cb));
// Query patterns by node_type and tag
let patterns = storage
@@ -377,18 +376,24 @@ mod tests {
// Check action enum values
let action_enum = &schema["properties"]["action"]["enum"];
assert!(action_enum
.as_array()
.unwrap()
.contains(&serde_json::json!("remember_pattern")));
assert!(action_enum
.as_array()
.unwrap()
.contains(&serde_json::json!("remember_decision")));
assert!(action_enum
.as_array()
.unwrap()
.contains(&serde_json::json!("get_context")));
assert!(
action_enum
.as_array()
.unwrap()
.contains(&serde_json::json!("remember_pattern"))
);
assert!(
action_enum
.as_array()
.unwrap()
.contains(&serde_json::json!("remember_decision"))
);
assert!(
action_enum
.as_array()
.unwrap()
.contains(&serde_json::json!("get_context"))
);
}
// === INTEGRATION TESTS ===
@@ -7,7 +7,6 @@ use chrono::Utc;
use serde_json::Value;
use std::sync::Arc;
use vestige_core::{RecallInput, SearchMode, Storage};
/// Input schema for match_context tool
@@ -50,19 +49,18 @@ pub fn schema() -> Value {
})
}
pub async fn execute(
storage: &Arc<Storage>,
args: Option<Value>,
) -> Result<Value, String> {
pub async fn execute(storage: &Arc<Storage>, args: Option<Value>) -> Result<Value, String> {
let args = args.ok_or("Missing arguments")?;
let query = args["query"]
.as_str()
.ok_or("query is required")?;
let query = args["query"].as_str().ok_or("query is required")?;
let topics: Vec<String> = args["topics"]
.as_array()
.map(|arr| arr.iter().filter_map(|v| v.as_str().map(String::from)).collect())
.map(|arr| {
arr.iter()
.filter_map(|v| v.as_str().map(String::from))
.collect()
})
.unwrap_or_default();
let project = args["project"].as_str().map(String::from);
@@ -83,11 +81,11 @@ pub async fn execute(
search_mode: SearchMode::Hybrid,
valid_at: None,
};
let candidates = storage.recall(recall_input)
.map_err(|e| e.to_string())?;
let candidates = storage.recall(recall_input).map_err(|e| e.to_string())?;
// Score by context match (simplified implementation)
let mut scored_results: Vec<_> = candidates.into_iter()
let mut scored_results: Vec<_> = candidates
.into_iter()
.map(|mem| {
// Calculate context score based on:
// 1. Temporal proximity (how recent)
@@ -98,8 +96,14 @@ pub async fn execute(
let tag_overlap = if topics.is_empty() {
0.5 // Neutral if no topics specified
} else {
let matching = mem.tags.iter()
.filter(|t| topics.iter().any(|topic| topic.to_lowercase().contains(&t.to_lowercase())))
let matching = mem
.tags
.iter()
.filter(|t| {
topics
.iter()
.any(|topic| topic.to_lowercase().contains(&t.to_lowercase()))
})
.count();
matching as f64 / topics.len().max(1) as f64
};
@@ -136,7 +140,8 @@ pub async fn execute(
scored_results.sort_by(|a, b| b.2.partial_cmp(&a.2).unwrap_or(std::cmp::Ordering::Equal));
scored_results.truncate(limit as usize);
let results: Vec<Value> = scored_results.into_iter()
let results: Vec<Value> = scored_results
.into_iter()
.map(|(mem, ctx_score, combined)| {
serde_json::json!({
"id": mem.id,
@@ -72,20 +72,64 @@ fn compute_trust(retention: f64, stability: f64, reps: i32, lapses: i32) -> f64
#[derive(Debug, Clone, PartialEq)]
enum QueryIntent {
FactCheck, // "Is X true?" → find support/contradiction evidence
Timeline, // "When did X happen?" → temporal ordering + pattern detection
RootCause, // "Why did X happen?" → causal chain backward
Comparison, // "How does X differ from Y?" → diff two memory clusters
Synthesis, // Default: "What do I know about X?" → cluster + best per cluster
FactCheck, // "Is X true?" → find support/contradiction evidence
Timeline, // "When did X happen?" → temporal ordering + pattern detection
RootCause, // "Why did X happen?" → causal chain backward
Comparison, // "How does X differ from Y?" → diff two memory clusters
Synthesis, // Default: "What do I know about X?" → cluster + best per cluster
}
fn classify_intent(query: &str) -> QueryIntent {
let q = query.to_lowercase();
let patterns: &[(QueryIntent, &[&str])] = &[
(QueryIntent::RootCause, &["why did", "root cause", "what caused", "because of", "reason for", "why is", "why was"]),
(QueryIntent::Timeline, &["when did", "timeline", "history of", "over time", "how has", "evolution of", "sequence of"]),
(QueryIntent::Comparison, &["differ", "compare", "versus", " vs ", "difference between", "changed from"]),
(QueryIntent::FactCheck, &["is it true", "did i", "was there", "verify", "confirm", "is this correct", "should i use", "should we"]),
(
QueryIntent::RootCause,
&[
"why did",
"root cause",
"what caused",
"because of",
"reason for",
"why is",
"why was",
],
),
(
QueryIntent::Timeline,
&[
"when did",
"timeline",
"history of",
"over time",
"how has",
"evolution of",
"sequence of",
],
),
(
QueryIntent::Comparison,
&[
"differ",
"compare",
"versus",
" vs ",
"difference between",
"changed from",
],
),
(
QueryIntent::FactCheck,
&[
"is it true",
"did i",
"was there",
"verify",
"confirm",
"is this correct",
"should i use",
"should we",
],
),
];
for (intent, keywords) in patterns {
if keywords.iter().any(|kw| q.contains(kw)) {
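The hunk cuts off inside the matching loop. A trimmed, self-contained sketch of the same first-match-wins classifier (keyword table shortened for brevity; the real table in this diff is longer) behaves like this:

```rust
#[derive(Debug, Clone, PartialEq)]
enum QueryIntent {
    FactCheck,
    Timeline,
    RootCause,
    Comparison,
    Synthesis,
}

// First pattern group that matches wins, so RootCause phrasing outranks
// Timeline phrasing, and Synthesis is the fallthrough default.
fn classify_intent(query: &str) -> QueryIntent {
    let q = query.to_lowercase();
    let patterns: &[(QueryIntent, &[&str])] = &[
        (QueryIntent::RootCause, &["why did", "root cause", "what caused"]),
        (QueryIntent::Timeline, &["when did", "timeline", "history of"]),
        (QueryIntent::Comparison, &["differ", "compare", " vs "]),
        (QueryIntent::FactCheck, &["is it true", "verify", "confirm"]),
    ];
    for (intent, keywords) in patterns {
        if keywords.iter().any(|kw| q.contains(kw)) {
            return intent.clone();
        }
    }
    QueryIntent::Synthesis // "What do I know about X?" default
}
```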
@@ -118,9 +162,15 @@ struct RelationAssessment {
/// Assess the relationship between two memories using embedding similarity,
/// correction signals, temporal ordering, and trust comparison.
/// No LLM needed — pure algorithmic assessment.
fn assess_relation(a_content: &str, b_content: &str, a_trust: f64, b_trust: f64,
a_date: chrono::DateTime<Utc>, b_date: chrono::DateTime<Utc>,
topic_sim: f32) -> RelationAssessment {
fn assess_relation(
a_content: &str,
b_content: &str,
a_trust: f64,
b_trust: f64,
a_date: chrono::DateTime<Utc>,
b_date: chrono::DateTime<Utc>,
topic_sim: f32,
) -> RelationAssessment {
// Irrelevant: different topics
if topic_sim < 0.15 {
return RelationAssessment {
@@ -136,12 +186,21 @@ fn assess_relation(a_content: &str, b_content: &str, a_trust: f64, b_trust: f64,
// Supersession: same topic + newer + higher trust
if topic_sim > 0.4 && time_delta_days > 0 && trust_diff > 0.05 && !has_correction {
let (newer, older) = if b_date > a_date { ("B", "A") } else { ("A", "B") };
let (newer, older) = if b_date > a_date {
("B", "A")
} else {
("A", "B")
};
return RelationAssessment {
relation: Relation::Supersedes,
confidence: topic_sim as f64 * (0.5 + trust_diff.min(0.5)),
reasoning: format!("{} supersedes {} (newer by {}d, trust +{:.0}%)",
newer, older, time_delta_days, trust_diff * 100.0),
reasoning: format!(
"{} supersedes {} (newer by {}d, trust +{:.0}%)",
newer,
older,
time_delta_days,
trust_diff * 100.0
),
};
}
@@ -150,7 +209,10 @@ fn assess_relation(a_content: &str, b_content: &str, a_trust: f64, b_trust: f64,
return RelationAssessment {
relation: Relation::Contradicts,
confidence: topic_sim as f64 * 0.8,
reasoning: format!("Contradiction detected (similarity {:.2}, correction signals present)", topic_sim),
reasoning: format!(
"Contradiction detected (similarity {:.2}, correction signals present)",
topic_sim
),
};
}
@@ -159,7 +221,10 @@ fn assess_relation(a_content: &str, b_content: &str, a_trust: f64, b_trust: f64,
return RelationAssessment {
relation: Relation::Supports,
confidence: topic_sim as f64,
reasoning: format!("Topically aligned (similarity {:.2}), consistent stance", topic_sim),
reasoning: format!(
"Topically aligned (similarity {:.2}), consistent stance",
topic_sim
),
};
}
@@ -188,29 +253,19 @@ fn generate_reasoning_chain(
// Intent-specific opening
match intent {
QueryIntent::FactCheck => {
chain.push_str(&format!(
"FACT CHECK: \"{}\"\n\n", query
));
chain.push_str(&format!("FACT CHECK: \"{}\"\n\n", query));
}
QueryIntent::Timeline => {
chain.push_str(&format!(
"TIMELINE: \"{}\"\n\n", query
));
chain.push_str(&format!("TIMELINE: \"{}\"\n\n", query));
}
QueryIntent::RootCause => {
chain.push_str(&format!(
"ROOT CAUSE ANALYSIS: \"{}\"\n\n", query
));
chain.push_str(&format!("ROOT CAUSE ANALYSIS: \"{}\"\n\n", query));
}
QueryIntent::Comparison => {
chain.push_str(&format!(
"COMPARISON: \"{}\"\n\n", query
));
chain.push_str(&format!("COMPARISON: \"{}\"\n\n", query));
}
QueryIntent::Synthesis => {
chain.push_str(&format!(
"SYNTHESIS: \"{}\"\n\n", query
));
chain.push_str(&format!("SYNTHESIS: \"{}\"\n\n", query));
}
}
@@ -223,7 +278,8 @@ fn generate_reasoning_chain(
));
// Superseded memories — with reasoning arrows
let superseded: Vec<_> = relations.iter()
let superseded: Vec<_> = relations
.iter()
.filter(|(_, _, r)| matches!(r.relation, Relation::Supersedes))
.collect();
for (preview, trust, rel) in &superseded {
@ -236,11 +292,13 @@ fn generate_reasoning_chain(
}
// Supporting evidence
let supporting: Vec<_> = relations.iter()
let supporting: Vec<_> = relations
.iter()
.filter(|(_, _, r)| matches!(r.relation, Relation::Supports))
.collect();
if !supporting.is_empty() {
chain.push_str(&format!("SUPPORTED BY {} MEMOR{}:\n",
chain.push_str(&format!(
"SUPPORTED BY {} MEMOR{}:\n",
supporting.len(),
if supporting.len() == 1 { "Y" } else { "IES" },
));
@ -254,11 +312,15 @@ fn generate_reasoning_chain(
}
// Contradicting evidence
let contradicting: Vec<_> = relations.iter()
let contradicting: Vec<_> = relations
.iter()
.filter(|(_, _, r)| matches!(r.relation, Relation::Contradicts))
.collect();
if !contradicting.is_empty() {
chain.push_str(&format!("CONTRADICTING EVIDENCE ({}):\n", contradicting.len()));
chain.push_str(&format!(
"CONTRADICTING EVIDENCE ({}):\n",
contradicting.len()
));
for (preview, trust, rel) in contradicting.iter().take(3) {
chain.push_str(&format!(
" ! (trust {:.0}%): \"{}\"\n -> {}\n",
@@ -284,36 +346,61 @@
// ============================================================================
const NEGATION_PAIRS: &[(&str, &str)] = &[
("don't", "do"), ("never", "always"), ("avoid", "use"),
("wrong", "right"), ("incorrect", "correct"),
("deprecated", "recommended"), ("outdated", "current"),
("removed", "added"), ("disabled", "enabled"),
("not ", ""), ("no longer", ""),
("don't", "do"),
("never", "always"),
("avoid", "use"),
("wrong", "right"),
("incorrect", "correct"),
("deprecated", "recommended"),
("outdated", "current"),
("removed", "added"),
("disabled", "enabled"),
("not ", ""),
("no longer", ""),
];
const CORRECTION_SIGNALS: &[&str] = &[
"actually", "correction", "update:", "updated:", "fixed",
"was wrong", "changed to", "now uses", "replaced by",
"superseded", "no longer", "instead of", "switched to", "migrated to",
"actually",
"correction",
"update:",
"updated:",
"fixed",
"was wrong",
"changed to",
"now uses",
"replaced by",
"superseded",
"no longer",
"instead of",
"switched to",
"migrated to",
];
fn appears_contradictory(a: &str, b: &str) -> bool {
let a_lower = a.to_lowercase();
let b_lower = b.to_lowercase();
let a_words: std::collections::HashSet<&str> = a_lower.split_whitespace().filter(|w| w.len() > 3).collect();
let b_words: std::collections::HashSet<&str> = b_lower.split_whitespace().filter(|w| w.len() > 3).collect();
let a_words: std::collections::HashSet<&str> =
a_lower.split_whitespace().filter(|w| w.len() > 3).collect();
let b_words: std::collections::HashSet<&str> =
b_lower.split_whitespace().filter(|w| w.len() > 3).collect();
let shared_words = a_words.intersection(&b_words).count();
if shared_words < 2 { return false; }
if shared_words < 2 {
return false;
}
for (neg, _) in NEGATION_PAIRS {
if (a_lower.contains(neg) && !b_lower.contains(neg))
|| (b_lower.contains(neg) && !a_lower.contains(neg))
{ return true; }
{
return true;
}
}
for signal in CORRECTION_SIGNALS {
if a_lower.contains(signal) || b_lower.contains(signal) { return true; }
if a_lower.contains(signal) || b_lower.contains(signal) {
return true;
}
}
false
}
@@ -321,12 +408,20 @@ fn appears_contradictory(a: &str, b: &str) -> bool {
fn topic_overlap(a: &str, b: &str) -> f32 {
let a_lower = a.to_lowercase();
let b_lower = b.to_lowercase();
let a_words: std::collections::HashSet<&str> = a_lower.split_whitespace().filter(|w| w.len() > 3).collect();
let b_words: std::collections::HashSet<&str> = b_lower.split_whitespace().filter(|w| w.len() > 3).collect();
if a_words.is_empty() || b_words.is_empty() { return 0.0; }
let a_words: std::collections::HashSet<&str> =
a_lower.split_whitespace().filter(|w| w.len() > 3).collect();
let b_words: std::collections::HashSet<&str> =
b_lower.split_whitespace().filter(|w| w.len() > 3).collect();
if a_words.is_empty() || b_words.is_empty() {
return 0.0;
}
let intersection = a_words.intersection(&b_words).count();
let union = a_words.union(&b_words).count();
if union == 0 {
0.0
} else {
intersection as f32 / union as f32
}
}
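The function above is plain Jaccard similarity over words longer than three characters. A runnable mirror of that logic (same behavior, standalone):

```rust
use std::collections::HashSet;

// Jaccard overlap over words longer than 3 chars, mirroring topic_overlap.
fn topic_overlap(a: &str, b: &str) -> f32 {
    let a = a.to_lowercase();
    let b = b.to_lowercase();
    let aw: HashSet<&str> = a.split_whitespace().filter(|w| w.len() > 3).collect();
    let bw: HashSet<&str> = b.split_whitespace().filter(|w| w.len() > 3).collect();
    if aw.is_empty() || bw.is_empty() {
        return 0.0;
    }
    let inter = aw.intersection(&bw).count();
    let union = aw.union(&bw).count();
    if union == 0 { 0.0 } else { inter as f32 / union as f32 }
}

fn main() {
    // 4 shared words / 6 total words ≈ 0.67
    let o = topic_overlap(
        "vestige uses usearch for vector search",
        "vestige vector search powered by usearch",
    );
    assert!(o > 0.3 && o <= 1.0);
    println!("{o}");
}
```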
// ============================================================================
@ -389,7 +484,10 @@ pub async fn execute(
let mut ranked = results;
if let Ok(mut cog) = cognitive.try_lock() {
let candidates: Vec<_> = ranked
.iter()
.map(|r| (r.clone(), r.node.content.clone()))
.collect();
if let Ok(reranked) = cog.reranker.rerank(&args.query, candidates, Some(depth)) {
ranked = reranked.into_iter().map(|rr| rr.item).collect();
}
@ -399,7 +497,8 @@ pub async fn execute(
// STAGE 2: Spreading Activation Expansion
// ====================================================================
let mut activation_expanded = 0usize;
let existing_ids: std::collections::HashSet<String> =
ranked.iter().map(|r| r.node.id.clone()).collect();
if let Ok(mut cog) = cognitive.try_lock() {
let mut expanded_ids = Vec::new();
@ -431,24 +530,27 @@ pub async fn execute(
// STAGE 3: FSRS-6 Trust Scoring
// ====================================================================
let scored: Vec<ScoredMemory> = ranked
.iter()
.map(|r| {
let trust = compute_trust(
r.node.retention_strength,
r.node.stability,
r.node.reps,
r.node.lapses,
);
ScoredMemory {
id: r.node.id.clone(),
content: r.node.content.clone(),
tags: r.node.tags.clone(),
trust,
updated_at: r.node.updated_at,
created_at: r.node.created_at,
retention: r.node.retention_strength,
combined_score: r.combined_score,
}
})
.collect();
// ====================================================================
// STAGE 4: Temporal Supersession
@ -488,14 +590,20 @@ pub async fn execute(
let a = &scored[i];
let b = &scored[j];
let overlap = topic_overlap(&a.content, &b.content);
if overlap < 0.15 {
continue;
}
let is_contradiction = appears_contradictory(&a.content, &b.content);
if !is_contradiction {
continue;
}
// Only flag as real contradiction if BOTH have decent trust
let min_trust = a.trust.min(b.trust);
if min_trust < 0.3 {
continue;
} // Low-trust memory isn't worth flagging
let (stronger, weaker) = if a.trust >= b.trust { (a, b) } else { (b, a) };
contradictions.push(serde_json::json!({
@ -521,9 +629,12 @@ pub async fn execute(
// ====================================================================
let mut related_insights: Vec<Value> = Vec::new();
if let Ok(insights) = storage.get_insights(20) {
let memory_ids: std::collections::HashSet<&str> =
scored.iter().map(|s| s.id.as_str()).collect();
for insight in insights {
let overlaps = insight
.source_memories
.iter()
.any(|src_id| memory_ids.contains(src_id.as_str()));
if overlaps {
related_insights.push(serde_json::json!({
@ -540,27 +651,35 @@ pub async fn execute(
// STAGE 7: Relation Assessment (per-pair, using trust + temporal + similarity)
// ====================================================================
let mut pair_relations: Vec<(String, f64, RelationAssessment)> = Vec::new();
if let Some(primary) = scored
.iter()
.filter(|s| !superseded_ids.contains(&s.id))
.max_by(|a, b| {
a.trust
.partial_cmp(&b.trust)
.unwrap_or(std::cmp::Ordering::Equal)
})
{
for other in scored.iter().filter(|s| s.id != primary.id).take(15) {
// Use combined_score as a proxy for semantic similarity (already reranked)
// Fall back to topic_overlap for keyword-level comparison
let sim = topic_overlap(&primary.content, &other.content);
let effective_sim = if other.combined_score > 0.2 {
sim.max(0.3)
} else {
sim
};
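The floor logic above can be isolated for clarity: when the reranker's `combined_score` already signals semantic closeness, the keyword-level overlap is floored at 0.3 so near-duplicate phrasings aren't dismissed. A sketch (the helper name `effective_similarity` is illustrative, not from the codebase):

```rust
// Sketch: combined_score (from reranking) floors the keyword overlap at 0.3,
// so semantically close pairs survive low literal word overlap.
fn effective_similarity(topic_sim: f32, combined_score: f64) -> f32 {
    if combined_score > 0.2 {
        topic_sim.max(0.3)
    } else {
        topic_sim
    }
}

fn main() {
    assert_eq!(effective_similarity(0.05, 0.5), 0.3); // floored
    assert_eq!(effective_similarity(0.05, 0.1), 0.05); // passed through
    println!("ok");
}
```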
let rel = assess_relation(
&primary.content,
&other.content,
primary.trust,
other.trust,
primary.updated_at,
other.updated_at,
effective_sim,
);
if !matches!(rel.relation, Relation::Irrelevant) {
pair_relations.push((other.content.chars().take(100).collect(), other.trust, rel));
}
}
}
@ -595,25 +714,32 @@ pub async fn execute(
.partial_cmp(&composite(a))
.unwrap_or(std::cmp::Ordering::Equal)
});
let evidence: Vec<Value> = non_superseded
.iter()
.take(10)
.enumerate()
.map(|(i, s)| {
serde_json::json!({
"id": s.id,
"preview": s.content.chars().take(200).collect::<String>(),
"trust": (s.trust * 100.0).round() / 100.0,
"date": s.updated_at.to_rfc3339(),
"role": if i == 0 { "primary" } else { "supporting" },
})
})
.collect();
// Build evolution timeline
let mut evolution: Vec<Value> = by_date
.iter()
.rev()
.map(|s| {
serde_json::json!({
"date": s.updated_at.format("%b %d, %Y").to_string(),
"preview": s.content.chars().take(100).collect::<String>(),
"trust": (s.trust * 100.0).round() / 100.0,
})
})
.collect();
evolution.truncate(15); // cap timeline length
@ -639,12 +765,15 @@ pub async fn execute(
if contradictions.is_empty() {
format!(
"High confidence ({:.0}%). Recommended memory (trust {:.0}%, {}) is the most reliable source.",
confidence * 100.0,
rec.trust * 100.0,
rec.updated_at.format("%b %d, %Y")
)
} else {
format!(
"WARNING: {} contradiction(s) detected. Recommended memory has trust {:.0}% but conflicts exist. Review contradictions below.",
contradictions.len(),
rec.trust * 100.0
)
}
} else {
@ -683,11 +812,21 @@ pub async fn execute(
});
}
if !evidence.is_empty() {
response["evidence"] = serde_json::json!(evidence);
}
if !contradictions.is_empty() {
response["contradictions"] = serde_json::json!(contradictions);
}
if !superseded.is_empty() {
response["superseded"] = serde_json::json!(superseded);
}
if !evolution.is_empty() {
response["evolution"] = serde_json::json!(evolution);
}
if !related_insights.is_empty() {
response["related_insights"] = serde_json::json!(related_insights);
}
Ok(response)
}
@ -849,7 +988,11 @@ mod tests {
fn test_trust_score_medium() {
// Medium everything
let trust = compute_trust(0.6, 15.0, 5, 2);
assert!(
trust > 0.4 && trust < 0.7,
"Expected 0.4-0.7, got {}",
trust
);
}
#[test]
@ -861,7 +1004,10 @@ mod tests {
#[test]
fn test_contradiction_requires_shared_words() {
assert!(!appears_contradictory(
"not sure about weather",
"Rust is fast"
));
}
#[test]
@ -874,7 +1020,10 @@ mod tests {
#[test]
fn test_topic_overlap_similar() {
let overlap = topic_overlap(
"Vestige uses USearch for vector search",
"Vestige vector search powered by USearch HNSW",
);
assert!(overlap > 0.3);
}
@ -895,32 +1044,62 @@ mod tests {
#[test]
fn test_intent_fact_check() {
assert_eq!(
classify_intent("Is it true that Vestige uses USearch?"),
QueryIntent::FactCheck
);
assert_eq!(
classify_intent("Did I switch to port 3002?"),
QueryIntent::FactCheck
);
assert_eq!(
classify_intent("Should I use prefix caching?"),
QueryIntent::FactCheck
);
}
#[test]
fn test_intent_timeline() {
assert_eq!(
classify_intent("When did the port change happen?"),
QueryIntent::Timeline
);
assert_eq!(
classify_intent("How has the AIMO3 score evolved over time?"),
QueryIntent::Timeline
);
}
#[test]
fn test_intent_root_cause() {
assert_eq!(
classify_intent("Why did the build fail?"),
QueryIntent::RootCause
);
assert_eq!(
classify_intent("What caused the score regression?"),
QueryIntent::RootCause
);
}
#[test]
fn test_intent_comparison() {
assert_eq!(
classify_intent("How does USearch differ from FAISS?"),
QueryIntent::Comparison
);
assert_eq!(
classify_intent("Compare FSRS versus SM-2"),
QueryIntent::Comparison
);
}
#[test]
fn test_intent_synthesis_default() {
assert_eq!(
classify_intent("Tell me about Sam's projects"),
QueryIntent::Synthesis
);
assert_eq!(classify_intent("What is Vestige?"), QueryIntent::Synthesis);
}
@ -928,8 +1107,15 @@ mod tests {
#[test]
fn test_relation_irrelevant() {
let rel = assess_relation(
"Rust is fast",
"The weather is nice",
0.8,
0.8,
Utc::now(),
Utc::now(),
0.05,
);
assert!(matches!(rel.relation, Relation::Irrelevant));
}
@ -938,7 +1124,12 @@ mod tests {
let rel = assess_relation(
"Vestige uses USearch for vector search",
"USearch provides fast HNSW indexing for Vestige",
0.8,
0.7,
Utc::now(),
Utc::now(),
0.6,
);
assert!(matches!(rel.relation, Relation::Supports));
}
@ -947,7 +1138,12 @@ mod tests {
let rel = assess_relation(
"Don't use FAISS for vector search in production anymore",
"Use FAISS for vector search in production always",
0.8,
0.5,
Utc::now(),
Utc::now(),
0.7,
);
assert!(matches!(rel.relation, Relation::Contradicts));
}
}


@ -9,7 +9,6 @@ use serde_json::Value;
use std::collections::HashMap;
use std::sync::Arc;
use vestige_core::Storage;
#[cfg(all(feature = "embeddings", feature = "vector-search"))]
use vestige_core::cosine_similarity;
@ -89,10 +88,7 @@ impl UnionFind {
}
}
pub async fn execute(storage: &Arc<Storage>, args: Option<Value>) -> Result<Value, String> {
let args: DedupArgs = match args {
Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
None => DedupArgs {
@ -108,7 +104,6 @@ pub async fn execute(
#[cfg(all(feature = "embeddings", feature = "vector-search"))]
{
// Load all embeddings
let all_embeddings = storage
.get_all_embeddings()
@ -191,10 +186,8 @@ pub async fn execute(
}
// Only keep clusters with >1 member, sorted by size descending
let mut clusters: Vec<Vec<usize>> =
cluster_map.into_values().filter(|c| c.len() > 1).collect();
clusters.sort_by_key(|b| std::cmp::Reverse(b.len()));
clusters.truncate(limit);
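The two lines above implement the dedup tool's cluster post-processing: singleton clusters are dropped, survivors are ranked by size, and the list is capped. A standalone sketch of that filter (the helper name `top_clusters` is illustrative):

```rust
use std::collections::HashMap;

// Sketch: keep only multi-member clusters, sort by size descending,
// and cap at `limit` — mirroring the dedup tool's post-processing.
fn top_clusters(cluster_map: HashMap<usize, Vec<usize>>, limit: usize) -> Vec<Vec<usize>> {
    let mut clusters: Vec<Vec<usize>> =
        cluster_map.into_values().filter(|c| c.len() > 1).collect();
    clusters.sort_by_key(|c| std::cmp::Reverse(c.len()));
    clusters.truncate(limit);
    clusters
}

fn main() {
    let mut m = HashMap::new();
    m.insert(0, vec![1]);       // singleton: dropped
    m.insert(1, vec![2, 3]);    // pair
    m.insert(2, vec![4, 5, 6]); // triple: ranked first
    let out = top_clusters(m, 10);
    assert_eq!(out.len(), 2);
    assert_eq!(out[0].len(), 3);
    println!("ok");
}
```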


@ -4,8 +4,8 @@
use std::sync::Arc;
use tokio::sync::Mutex;
use crate::cognitive::CognitiveEngine;
use chrono::Utc;
use vestige_core::{DreamHistoryRecord, InsightRecord, LinkType, Storage};
pub fn schema() -> serde_json::Value {
@ -34,21 +34,24 @@ pub async fn execute(
.min(500) as usize; // Cap at 500 to prevent O(N^2) hang
// v1.9.0: Waking SWR tagging — preferential replay of tagged memories (70/30 split)
let tagged_nodes = storage
.get_waking_tagged_memories(memory_count as i32)
.unwrap_or_default();
let tagged_count = tagged_nodes.len();
// Calculate how many tagged vs random to include
let tagged_target = (memory_count * 7 / 10).min(tagged_count); // 70% tagged
let _random_target = memory_count.saturating_sub(tagged_target); // 30% random (used for logging)
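The budget arithmetic above can be sketched on its own: 70% of the dream budget goes to waking-tagged memories, capped by how many exist, and the remainder is filled with random memories (the helper name `split_budget` is illustrative):

```rust
// Sketch of the 70/30 tagged-vs-random split used when building the
// dream memory set: tagged_target is 70% of the budget, capped by
// availability; the rest is filled randomly.
fn split_budget(memory_count: usize, tagged_available: usize) -> (usize, usize) {
    let tagged_target = (memory_count * 7 / 10).min(tagged_available);
    let random_target = memory_count.saturating_sub(tagged_target);
    (tagged_target, random_target)
}

fn main() {
    assert_eq!(split_budget(100, 500), (70, 30)); // plenty tagged: 70/30
    assert_eq!(split_budget(100, 20), (20, 80)); // few tagged: fill randomly
    println!("ok");
}
```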
// Build the dream memory set: tagged memories first, then fill with random
let tagged_ids: std::collections::HashSet<String> = tagged_nodes
.iter()
.take(tagged_target)
.map(|n| n.id.clone())
.collect();
let random_nodes = storage
.get_all_nodes(memory_count as i32, 0)
.map_err(|e| format!("Failed to load memories: {}", e))?;
let mut all_nodes: Vec<_> = tagged_nodes.into_iter().take(tagged_target).collect();
@ -59,8 +62,10 @@ pub async fn execute(
}
// If still under capacity (e.g., all memories are tagged), fill from remaining tagged
if all_nodes.len() < memory_count {
let used_ids: std::collections::HashSet<String> =
all_nodes.iter().map(|n| n.id.clone()).collect();
let remaining_tagged = storage
.get_waking_tagged_memories(memory_count as i32)
.unwrap_or_default();
for node in remaining_tagged {
if !used_ids.contains(&node.id) && all_nodes.len() < memory_count {
@ -77,16 +82,17 @@ pub async fn execute(
}));
}
let dream_memories: Vec<vestige_core::DreamMemory> = all_nodes
.iter()
.map(|n| vestige_core::DreamMemory {
id: n.id.clone(),
content: n.content.clone(),
embedding: storage.get_node_embedding(&n.id).ok().flatten(),
tags: n.tags.clone(),
created_at: n.created_at,
access_count: n.reps as u32,
})
.collect();
let cog = cognitive.lock().await;
// Capture start time before the dream so we can identify newly discovered
@ -259,17 +265,18 @@ mod tests {
async fn ingest_n_memories(storage: &Arc<Storage>, n: usize) {
for i in 0..n {
storage
.ingest(vestige_core::IngestInput {
content: format!("Dream test memory number {}", i),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: vec!["dream-test".to_string()],
valid_from: None,
valid_until: None,
})
.unwrap();
}
}
@ -368,7 +375,10 @@ mod tests {
// After dream: dream history should exist
{
let last = storage.get_last_dream().unwrap();
assert!(
last.is_some(),
"Dream should have been persisted to database"
);
}
}
@ -379,20 +389,28 @@ mod tests {
// Create enough diverse memories to trigger connection discovery
for i in 0..15 {
storage
.ingest(vestige_core::IngestInput {
content: format!(
"Memory {} about topic {}: detailed content for connection discovery",
i,
if i % 3 == 0 {
"rust"
} else if i % 3 == 1 {
"cargo"
} else {
"testing"
}
),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: vec!["dream-roundtrip".to_string()],
valid_from: None,
valid_until: None,
})
.unwrap();
}
let cognitive = test_cognitive();
@ -403,7 +421,10 @@ mod tests {
if persisted > 0 {
// Verify connections are queryable from storage
let all_conns = storage.get_all_connections().unwrap();
assert!(
!all_conns.is_empty(),
"Persisted connections should be queryable"
);
// Verify connection IDs reference valid memories
let all_nodes = storage.get_all_nodes(100, 0).unwrap();
@ -425,7 +446,9 @@ mod tests {
// Verify live cognitive engine was hydrated
let cog = cognitive.lock().await;
let first_conn = &all_conns[0];
let assocs = cog
.activation_network
.get_associations(&first_conn.source_id);
assert!(
!assocs.is_empty(),
"Live cognitive engine should have been hydrated with dream connections"
@ -441,16 +464,18 @@ mod tests {
// Ingest memories and collect their IDs
let mut ids = Vec::new();
for i in 0..5 {
let result = storage
.ingest(vestige_core::IngestInput {
content: format!("Save connection test memory {}", i),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: vec!["save-conn-test".to_string()],
valid_from: None,
valid_until: None,
})
.unwrap();
ids.push(result.id);
}
@ -459,7 +484,7 @@ mod tests {
let mut saved = 0u32;
let mut errors = Vec::new();
for i in 0..ids.len() {
for j in (i + 1)..ids.len() {
let record = vestige_core::ConnectionRecord {
source_id: ids[i].clone(),
target_id: ids[j].clone(),
@ -471,10 +496,7 @@ mod tests {
};
match storage.save_connection(&record) {
Ok(_) => saved += 1,
Err(e) => errors.push(format!("{} -> {}: {}", ids[i], ids[j], e)),
}
}
}
@ -510,34 +532,62 @@ mod tests {
// Ingest memories with known high-similarity content (shared tags + similar text)
let topics = [
(
"Rust borrow checker prevents data races at compile time",
vec!["rust", "safety"],
),
(
"Rust ownership model ensures memory safety without GC",
vec!["rust", "safety"],
),
(
"Cargo is the Rust package manager and build system",
vec!["rust", "cargo"],
),
(
"Cargo.toml defines dependencies for Rust projects",
vec!["rust", "cargo"],
),
(
"Unit tests in Rust use #[test] attribute",
vec!["rust", "testing"],
),
(
"Integration tests in Rust live in the tests/ directory",
vec!["rust", "testing"],
),
(
"Clippy is a Rust linter that catches common mistakes",
vec!["rust", "tooling"],
),
(
"Rustfmt formats Rust code according to style guidelines",
vec!["rust", "tooling"],
),
];
for (content, tags) in &topics {
storage
.ingest(vestige_core::IngestInput {
content: content.to_string(),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: tags.iter().map(|t| t.to_string()).collect(),
valid_from: None,
valid_until: None,
})
.unwrap();
}
let cognitive = test_cognitive();
let result = execute(&storage, &cognitive, None).await.unwrap();
assert_eq!(result["status"], "dreamed");
let found = result["stats"]["new_connections_found"]
.as_u64()
.unwrap_or(0);
let persisted = result["connectionsPersisted"].as_u64().unwrap_or(0);
// Dream should discover connections between these related memories
@ -572,23 +622,52 @@ mod tests {
// Create diverse tagged memories to encourage insight generation
let topics = [
(
"Rust borrow checker prevents data races",
vec!["rust", "safety"],
),
(
"Rust ownership model ensures memory safety",
vec!["rust", "safety"],
),
(
"Cargo manages Rust project dependencies",
vec!["rust", "cargo"],
),
(
"Cargo.toml defines project configuration",
vec!["rust", "cargo"],
),
(
"Unit tests use the #[test] attribute",
vec!["rust", "testing"],
),
(
"Integration tests live in the tests directory",
vec!["rust", "testing"],
),
(
"Clippy catches common Rust mistakes",
vec!["rust", "tooling"],
),
(
"Rustfmt automatically formats code",
vec!["rust", "tooling"],
),
];
for (content, tags) in &topics {
storage
.ingest(vestige_core::IngestInput {
content: content.to_string(),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: tags.iter().map(|t| t.to_string()).collect(),
valid_from: None,
valid_until: None,
})
.unwrap();
}
let result = execute(&storage, &test_cognitive(), None).await.unwrap();
@ -599,19 +678,30 @@ mod tests {
// If insights were generated, they should be persisted
if !response_insights.is_empty() {
assert!(
persisted_count > 0,
"Generated insights should be persisted to database"
);
let stored = storage.get_insights(100).unwrap();
assert_eq!(
stored.len(),
persisted_count as usize,
"All {} persisted insights should be retrievable",
persisted_count
);
// Verify insight fields
for insight in &stored {
assert!(!insight.id.is_empty(), "Insight ID should not be empty");
assert!(
!insight.insight.is_empty(),
"Insight text should not be empty"
);
assert!(insight.confidence >= 0.0 && insight.confidence <= 1.0);
assert!(insight.novelty_score >= 0.0);
assert!(
insight.feedback.is_none(),
"Fresh insight should have no feedback"
);
assert_eq!(insight.applied_count, 0);
}
}


@ -5,8 +5,8 @@ use std::sync::Arc;
use tokio::sync::Mutex;
use crate::cognitive::CognitiveEngine;
use vestige_core::Storage;
use vestige_core::advanced::{Connection, ConnectionType, MemoryChainBuilder, MemoryNode};
pub fn schema() -> serde_json::Value {
serde_json::json!({
@ -41,8 +41,14 @@ pub async fn execute(
args: Option<serde_json::Value>,
) -> Result<serde_json::Value, String> {
let args = args.ok_or("Missing arguments")?;
let action = args
.get("action")
.and_then(|v| v.as_str())
.ok_or("Missing 'action'")?;
let from = args
.get("from")
.and_then(|v| v.as_str())
.ok_or("Missing 'from'")?;
let to = args.get("to").and_then(|v| v.as_str());
let limit = args.get("limit").and_then(|v| v.as_u64()).unwrap_or(10) as usize;
@ -64,36 +70,34 @@ pub async fn execute(
};
match chain_opt {
Some(chain) => Ok(serde_json::json!({
"action": "chain",
"from": from_owned,
"to": to_owned,
"steps": chain.steps.iter().map(|s| serde_json::json!({
"memory_id": s.memory_id,
"memory_preview": s.memory_preview,
"connection_type": format!("{:?}", s.connection_type),
"connection_strength": s.connection_strength,
"reasoning": s.reasoning,
})).collect::<Vec<_>>(),
"confidence": chain.confidence,
"total_hops": chain.total_hops,
})),
None => Ok(serde_json::json!({
"action": "chain",
"from": from_owned,
"to": to_owned,
"steps": [],
"message": "No chain found between these memories"
})),
}
}
"associations" => {
let activation_assocs = cog.activation_network.get_associations(from);
let hippocampal_assocs = cog
.hippocampal_index
.get_associations(from, 2)
.unwrap_or_default();
let from_owned = from.to_string();
drop(cog); // release lock consistently (matches chain/bridges pattern)
@ -120,20 +124,22 @@ pub async fn execute(
all_associations.truncate(limit);
// Fallback: if in-memory modules are empty, query storage directly
if all_associations.is_empty()
&& let Ok(connections) = storage.get_connections_for_memory(&from_owned)
{
for conn in connections.iter().take(limit) {
let other_id = if conn.source_id == from_owned {
&conn.target_id
} else {
&conn.source_id
};
all_associations.push(serde_json::json!({
"memory_id": other_id,
"strength": conn.strength,
"link_type": conn.link_type,
"source": "persistent_graph",
}));
}
}
Ok(serde_json::json!({
@ -167,12 +173,19 @@ pub async fn execute(
"count": limited.len(),
}))
}
_ => Err(format!(
"Unknown action: '{}'. Expected: chain, associations, bridges",
action
)),
}
}
/// Build a temporary MemoryChainBuilder from persisted connections for fallback queries.
fn build_temp_chain_builder(
storage: &Arc<Storage>,
from_id: &str,
to_id: &str,
) -> MemoryChainBuilder {
let mut builder = MemoryChainBuilder::new();
// Load connections involving either endpoint
@ -191,7 +204,9 @@ fn build_temp_chain_builder(storage: &Arc<Storage>, from_id: &str, to_id: &str)
let mut seen_ids = std::collections::HashSet::new();
for conn in &all_conns {
for id in [&conn.source_id, &conn.target_id] {
if seen_ids.insert(id.clone())
&& let Ok(Some(node)) = storage.get_node(id)
{
builder.add_memory(MemoryNode {
id: node.id.clone(),
content_preview: node.content.chars().take(100).collect(),
@ -389,39 +404,47 @@ mod tests {
let (storage, _dir) = test_storage().await;
// Create two memories and a direct connection in storage
let id1 = storage
.ingest(vestige_core::IngestInput {
content: "Memory about Rust".to_string(),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: vec!["test".to_string()],
valid_from: None,
valid_until: None,
})
.unwrap()
.id;
let id2 = storage
.ingest(vestige_core::IngestInput {
content: "Memory about Cargo".to_string(),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: vec!["test".to_string()],
valid_from: None,
valid_until: None,
})
.unwrap()
.id;
// Save connection directly to storage (bypassing cognitive engine)
let now = chrono::Utc::now();
storage
.save_connection(&vestige_core::ConnectionRecord {
source_id: id1.clone(),
target_id: id2.clone(),
strength: 0.9,
link_type: "semantic".to_string(),
created_at: now,
last_activated: now,
activation_count: 1,
})
.unwrap();
// Execute with empty cognitive engine — should fall back to storage
let cognitive = test_cognitive();
@@ -447,22 +470,36 @@ mod tests {
// Create 3 memories: A -> B -> C
let make = |content: &str| vestige_core::IngestInput {
content: content.to_string(),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: vec!["test".to_string()],
valid_from: None,
valid_until: None,
};
let id_a = storage.ingest(make("Memory A about databases")).unwrap().id;
let id_b = storage.ingest(make("Memory B about indexes")).unwrap().id;
let id_c = storage
.ingest(make("Memory C about performance"))
.unwrap()
.id;
// Save connections A->B and B->C to storage
let now = chrono::Utc::now();
for (src, tgt) in [(&id_a, &id_b), (&id_b, &id_c)] {
storage
.save_connection(&vestige_core::ConnectionRecord {
source_id: src.clone(),
target_id: tgt.clone(),
strength: 0.9,
link_type: "semantic".to_string(),
created_at: now,
last_activated: now,
activation_count: 1,
})
.unwrap();
}
// Execute chain with empty cognitive engine — should fall back to storage
@@ -472,7 +509,10 @@ mod tests {
let value = result.unwrap();
assert_eq!(value["action"], "chain");
let steps = value["steps"].as_array().unwrap();
assert!(
!steps.is_empty(),
"Chain should find path A->B->C via storage fallback"
);
}
#[tokio::test]
@@ -481,9 +521,14 @@ mod tests {
// Create 3 memories: A -> B -> C (B is the bridge)
let make = |content: &str| vestige_core::IngestInput {
content: content.to_string(),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: vec!["test".to_string()],
valid_from: None,
valid_until: None,
};
let id_a = storage.ingest(make("Bridge test memory A")).unwrap().id;
let id_b = storage.ingest(make("Bridge test memory B")).unwrap().id;
@@ -491,11 +536,17 @@ mod tests {
let now = chrono::Utc::now();
for (src, tgt) in [(&id_a, &id_b), (&id_b, &id_c)] {
storage
.save_connection(&vestige_core::ConnectionRecord {
source_id: src.clone(),
target_id: tgt.clone(),
strength: 0.9,
link_type: "semantic".to_string(),
created_at: now,
last_activated: now,
activation_count: 1,
})
.unwrap();
}
// Execute bridges with empty cognitive engine
@@ -505,6 +556,9 @@ mod tests {
let value = result.unwrap();
assert_eq!(value["action"], "bridges");
let bridges = value["bridges"].as_array().unwrap();
assert!(
!bridges.is_empty(),
"Should find B as bridge between A and C via storage fallback"
);
}
}
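For reference alongside these graph-tool tests, the SIF retrieval penalty described in the release notes (a `0.15 × suppression_count` deduction, saturating at 80%) can be sketched as below. This is an illustrative reading, not the shipped search pipeline; the function names are hypothetical, and the multiplicative application to the base score is an assumption.

```rust
// Sketch (assumed names, not the real API) of the v2.0.5 SIF penalty:
// each `suppress` call adds 0.15 to the penalty, capped at 0.80.
fn sif_penalty(suppression_count: u32) -> f64 {
    (0.15 * suppression_count as f64).min(0.80)
}

// Assumed multiplicative application on top of the base retrieval score,
// so it stacks with passive FSRS decay in the accessibility stage.
fn apply_suppression(score: f64, suppression_count: u32) -> f64 {
    score * (1.0 - sif_penalty(suppression_count))
}

fn main() {
    // One suppression: 0.9 * (1.0 - 0.15) = 0.765
    println!("{:.3}", apply_suppression(0.9, 1));
    // Six suppressions saturate at the 80% cap: 0.9 * 0.2 = 0.18
    println!("{:.3}", apply_suppression(0.9, 6));
}
```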


@@ -73,19 +73,23 @@ pub async fn execute_promote(
// Validate UUID
uuid::Uuid::parse_str(&args.id).map_err(|_| "Invalid node ID format".to_string())?;
// Get node before for comparison
let before = storage
.get_node(&args.id)
.map_err(|e| e.to_string())?
.ok_or_else(|| format!("Node not found: {}", args.id))?;
let node = storage
.promote_memory(&args.id)
.map_err(|e| e.to_string())?;
// ====================================================================
// COGNITIVE FEEDBACK PIPELINE (promote)
// ====================================================================
if let Ok(mut cog) = cognitive.try_lock() {
// 5A. Reward signal — record positive outcome
cog.reward_signal
.record_outcome(&args.id, OutcomeType::Helpful);
// 5B. Importance tracking — mark as helpful retrieval
cog.importance_tracker.on_retrieved(&args.id, true);
@@ -143,9 +147,10 @@ pub async fn execute_demote(
// Validate UUID
uuid::Uuid::parse_str(&args.id).map_err(|_| "Invalid node ID format".to_string())?;
// Get node before for comparison
let before = storage
.get_node(&args.id)
.map_err(|e| e.to_string())?
.ok_or_else(|| format!("Node not found: {}", args.id))?;
let node = storage.demote_memory(&args.id).map_err(|e| e.to_string())?;
@@ -155,7 +160,8 @@ pub async fn execute_demote(
// ====================================================================
if let Ok(mut cog) = cognitive.try_lock() {
// 5A. Reward signal — record negative outcome
cog.reward_signal
.record_outcome(&args.id, OutcomeType::NotHelpful);
// 5B. Importance tracking — mark as unhelpful retrieval
cog.importance_tracker.on_retrieved(&args.id, false);
@@ -237,8 +243,9 @@ pub async fn execute_request_feedback(
// Validate UUID
uuid::Uuid::parse_str(&args.id).map_err(|_| "Invalid node ID format".to_string())?;
let node = storage
.get_node(&args.id)
.map_err(|e| e.to_string())?
.ok_or_else(|| format!("Node not found: {}", args.id))?;
// Truncate content for display
@@ -319,10 +326,12 @@ mod tests {
assert_eq!(schema["type"], "object");
assert!(schema["properties"]["id"].is_object());
assert!(schema["properties"]["reason"].is_object());
assert!(
schema["required"]
.as_array()
.unwrap()
.contains(&serde_json::json!("id"))
);
}
#[test]
@@ -330,10 +339,12 @@ mod tests {
let schema = demote_schema();
assert_eq!(schema["type"], "object");
assert!(schema["properties"]["id"].is_object());
assert!(
schema["required"]
.as_array()
.unwrap()
.contains(&serde_json::json!("id"))
);
}
#[test]
@@ -342,10 +353,12 @@ mod tests {
assert_eq!(schema["type"], "object");
assert!(schema["properties"]["id"].is_object());
assert!(schema["properties"]["context"].is_object());
assert!(
schema["required"]
.as_array()
.unwrap()
.contains(&serde_json::json!("id"))
);
}
// === PROMOTE TESTS ===
@@ -370,8 +383,7 @@ mod tests {
#[tokio::test]
async fn test_promote_nonexistent_node_fails() {
let (storage, _dir) = test_storage().await;
let args = serde_json::json!({ "id": "00000000-0000-0000-0000-000000000000" });
let result = execute_promote(&storage, &test_cognitive(), Some(args)).await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("Node not found"));
@@ -454,8 +466,7 @@ mod tests {
#[tokio::test]
async fn test_demote_nonexistent_node_fails() {
let (storage, _dir) = test_storage().await;
let args = serde_json::json!({ "id": "00000000-0000-0000-0000-000000000000" });
let result = execute_demote(&storage, &test_cognitive(), Some(args)).await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("Node not found"));
@@ -510,8 +521,7 @@ mod tests {
#[tokio::test]
async fn test_request_feedback_nonexistent_node_fails() {
let (storage, _dir) = test_storage().await;
let args = serde_json::json!({ "id": "00000000-0000-0000-0000-000000000000" });
let result = execute_request_feedback(&storage, Some(args)).await;
assert!(result.is_err());
}
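Beside these feedback-tool tests, the 24h labile window described in the release notes (reversal via `suppress({id, reverse: true})` is honored only within 24 hours of suppression) can be sketched as a simple elapsed-time check. The function and constant names below are hypothetical, not the shipped implementation.

```rust
// Sketch (assumed names) of the 24h labile window from the v2.0.5 notes:
// a suppression is reversible only while the time since `suppressed_at`
// is within the window, mirroring Nader reconsolidation semantics.
const LABILE_WINDOW_SECS: i64 = 24 * 60 * 60;

fn is_reversible(seconds_since_suppression: i64) -> bool {
    seconds_since_suppression <= LABILE_WINDOW_SECS
}

fn main() {
    // One hour in: reversal still allowed.
    println!("{}", is_reversible(3_600));
    // ~25 hours in: window closed, suppression stands.
    println!("{}", is_reversible(90_000));
}
```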


@@ -125,52 +125,72 @@ pub async fn execute(
storage: &Arc<Storage>,
args: Option<serde_json::Value>,
) -> Result<serde_json::Value, String> {
let depth = args
.as_ref()
.and_then(|a| a.get("depth"))
.and_then(|v| v.as_u64())
.unwrap_or(2)
.min(3) as u32;
let max_nodes = args
.as_ref()
.and_then(|a| a.get("max_nodes"))
.and_then(|v| v.as_u64())
.unwrap_or(50)
.min(200) as usize;
// Determine center node
let center_id = if let Some(id) = args
.as_ref()
.and_then(|a| a.get("center_id"))
.and_then(|v| v.as_str())
{
id.to_string()
} else if let Some(query) = args
.as_ref()
.and_then(|a| a.get("query"))
.and_then(|v| v.as_str())
{
// Search for center node
let results = storage
.search(query, 1)
.map_err(|e| format!("Search failed: {}", e))?;
results
.first()
.map(|n| n.id.clone())
.ok_or_else(|| "No memories found matching query".to_string())?
} else {
// Default: use the most recent memory
let recent = storage
.get_all_nodes(1, 0)
.map_err(|e| format!("Failed to get recent node: {}", e))?;
recent
.first()
.map(|n| n.id.clone())
.ok_or_else(|| "No memories in database".to_string())?
};
// Get subgraph
let (nodes, edges) = storage
.get_memory_subgraph(&center_id, depth, max_nodes)
.map_err(|e| format!("Failed to get subgraph: {}", e))?;
if nodes.is_empty() || !nodes.iter().any(|n| n.id == center_id) {
return Err(format!(
"Memory '{}' not found or has no accessible data",
center_id
));
}
// Build index map for FR layout
let id_to_idx: std::collections::HashMap<&str, usize> = nodes
.iter()
.enumerate()
.map(|(i, n)| (n.id.as_str(), i))
.collect();
let layout_edges: Vec<(usize, usize, f64)> = edges
.iter()
.filter_map(|e| {
let u = id_to_idx.get(e.source_id.as_str())?;
let v = id_to_idx.get(e.target_id.as_str())?;
@@ -182,7 +202,8 @@ pub async fn execute(
let positions = fruchterman_reingold(nodes.len(), &layout_edges, 800.0, 600.0, 50);
// Build response
let nodes_json: Vec<serde_json::Value> = nodes
.iter()
.enumerate()
.map(|(i, n)| {
let (x, y) = positions.get(i).copied().unwrap_or((400.0, 300.0));
@@ -199,11 +220,15 @@ pub async fn execute(
"x": (x * 100.0).round() / 100.0,
"y": (y * 100.0).round() / 100.0,
"isCenter": n.id == center_id,
// v2.0.5 Active Forgetting — dashboard uses these to dim suppressed nodes
"suppression_count": n.suppression_count,
"suppressed_at": n.suppressed_at.map(|t| t.to_rfc3339()),
})
})
.collect();
let edges_json: Vec<serde_json::Value> = edges
.iter()
.map(|e| {
serde_json::json!({
"source": e.source_id,
@@ -293,16 +318,18 @@ mod tests {
#[tokio::test]
async fn test_graph_with_center_id() {
let (storage, _dir) = test_storage().await;
let node = storage
.ingest(vestige_core::IngestInput {
content: "Graph test memory".to_string(),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: vec!["test".to_string()],
valid_from: None,
valid_until: None,
})
.unwrap();
let args = serde_json::json!({ "center_id": node.id });
let result = execute(&storage, Some(args)).await;
@@ -318,16 +345,18 @@ mod tests {
#[tokio::test]
async fn test_graph_with_query() {
let (storage, _dir) = test_storage().await;
storage
.ingest(vestige_core::IngestInput {
content: "Quantum computing fundamentals".to_string(),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: vec!["science".to_string()],
valid_from: None,
valid_until: None,
})
.unwrap();
let args = serde_json::json!({ "query": "quantum" });
let result = execute(&storage, Some(args)).await;
@@ -339,16 +368,18 @@ mod tests {
#[tokio::test]
async fn test_graph_node_has_position() {
let (storage, _dir) = test_storage().await;
let node = storage
.ingest(vestige_core::IngestInput {
content: "Position test memory".to_string(),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: vec![],
valid_from: None,
valid_until: None,
})
.unwrap();
let args = serde_json::json!({ "center_id": node.id });
let result = execute(&storage, Some(args)).await.unwrap();


@@ -16,23 +16,28 @@ pub async fn execute(
_args: Option<serde_json::Value>,
) -> Result<serde_json::Value, String> {
// Average retention
let avg_retention = storage
.get_avg_retention()
.map_err(|e| format!("Failed to get avg retention: {}", e))?;
// Retention distribution
let distribution = storage
.get_retention_distribution()
.map_err(|e| format!("Failed to get retention distribution: {}", e))?;
let distribution_json: serde_json::Value = distribution
.iter()
.map(|(bucket, count)| serde_json::json!({ "bucket": bucket, "count": count }))
.collect();
// Retention trend
let trend = storage
.get_retention_trend()
.unwrap_or_else(|_| "unknown".to_string());
// Total memories and those below key thresholds
let stats = storage
.get_stats()
.map_err(|e| format!("Failed to get stats: {}", e))?;
let below_30 = storage.count_memories_below_retention(0.3).unwrap_or(0);
@@ -104,16 +109,18 @@ mod tests {
let (storage, _dir) = test_storage().await;
// Ingest some test memories
for i in 0..5 {
storage
.ingest(vestige_core::IngestInput {
content: format!("Health test memory {}", i),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: vec!["test".to_string()],
valid_from: None,
valid_until: None,
})
.unwrap();
}
let result = execute(&storage, None).await;
@@ -127,24 +134,24 @@ mod tests {
#[tokio::test]
async fn test_health_distribution_buckets() {
let (storage, _dir) = test_storage().await;
storage
.ingest(vestige_core::IngestInput {
content: "Test memory for distribution".to_string(),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: vec![],
valid_from: None,
valid_until: None,
})
.unwrap();
let result = execute(&storage, None).await.unwrap();
let dist = result["distribution"].as_array().unwrap();
// Should have at least one bucket with data
assert!(!dist.is_empty());
let total: i64 = dist.iter().map(|b| b["count"].as_i64().unwrap_or(0)).sum();
assert_eq!(total, 1);
}
}


@@ -70,7 +70,9 @@ pub async fn execute(
// Use CognitiveEngine's persistent signals (novelty/reward/attention accumulate)
let cog = cognitive.lock().await;
let score = cog
.importance_signals
.compute_importance(&args.content, &context);
// Also detect emotional markers for richer output
let emotional_markers = cog.arousal_signal.detect_emotional_markers(&args.content);
@@ -129,10 +131,12 @@ mod tests {
let schema = schema();
assert_eq!(schema["type"], "object");
assert!(schema["properties"]["content"].is_object());
assert!(
schema["required"]
.as_array()
.unwrap()
.contains(&serde_json::json!("content"))
);
}
#[tokio::test]
@@ -140,7 +144,12 @@ mod tests {
let storage = Arc::new(
Storage::new(Some(std::path::PathBuf::from("/tmp/test_importance.db"))).unwrap(),
);
let result = execute(
&storage,
&test_cognitive(),
Some(serde_json::json!({ "content": "" })),
)
.await;
assert!(result.is_err());
}


@@ -84,7 +84,9 @@ pub async fn execute(
if let Ok(cog) = cognitive.try_lock() {
// Full 4-channel importance scoring
let context = ImportanceContext::current();
let importance = cog
.importance_signals
.compute_importance(&args.content, &context);
importance_composite = importance.composite;
// Standalone novelty check (dopaminergic signal)
@@ -136,7 +138,13 @@ pub async fn execute(
let node_type = result.node.node_type.clone();
let has_embedding = result.node.has_embedding.unwrap_or(false);
run_post_ingest(
cognitive,
&node_id,
&node_content,
&node_type,
importance_composite,
);
Ok(serde_json::json!({
"success": true,
@@ -157,7 +165,13 @@ pub async fn execute(
let node_type = node.node_type.clone();
let has_embedding = node.has_embedding.unwrap_or(false);
run_post_ingest(
cognitive,
&node_id,
&node_content,
&node_type,
importance_composite,
);
Ok(serde_json::json!({
"success": true,
@@ -181,7 +195,13 @@ pub async fn execute(
let node_type = node.node_type.clone();
let has_embedding = node.has_embedding.unwrap_or(false);
run_post_ingest(
cognitive,
&node_id,
&node_content,
&node_type,
importance_composite,
);
Ok(serde_json::json!({
"success": true,
@@ -217,16 +237,13 @@ fn run_post_ingest(
cog.importance_signals.learn_content(content);
// Record in hippocampal index
let _ = cog
.hippocampal_index
.index_memory(node_id, content, node_type, Utc::now(), None);
// Cross-project pattern recording
cog.cross_project
.record_project_memory(node_id, "default", None);
}
}
@@ -421,7 +438,12 @@ mod tests {
let schema_value = schema();
assert_eq!(schema_value["type"], "object");
assert!(schema_value["properties"]["content"].is_object());
assert!(
schema_value["required"]
.as_array()
.unwrap()
.contains(&serde_json::json!("content"))
);
}
#[test]


@@ -265,25 +265,34 @@ async fn execute_set(
nlp_parsed = true;
// Extract trigger info from parsed intention
let (t_type, t_data) = match &parsed.trigger {
ProspectiveTrigger::TimeBased { .. } => (
"time".to_string(),
serde_json::json!({"type": "time"}).to_string(),
),
ProspectiveTrigger::DurationBased { after, .. } => {
let mins = after.num_minutes();
(
"time".to_string(),
serde_json::json!({"type": "time", "in_minutes": mins}).to_string(),
)
}
ProspectiveTrigger::EventBased { condition, .. } => (
"event".to_string(),
serde_json::json!({"type": "event", "condition": condition}).to_string(),
),
ProspectiveTrigger::ContextBased { context_match } => (
"context".to_string(),
serde_json::json!({"type": "context", "topic": format!("{:?}", context_match)})
.to_string(),
),
ProspectiveTrigger::Recurring { .. } => (
"recurring".to_string(),
serde_json::json!({"type": "recurring"}).to_string(),
),
_ => (
"event".to_string(),
serde_json::json!({"type": "event"}).to_string(),
),
};
nlp_trigger_type = Some(t_type);
nlp_trigger_data = Some(t_data);
@@ -426,7 +435,6 @@ async fn execute_check(
let _ = cog.prospective_memory.update_context(prospective_ctx);
}
// Get active intentions
let intentions = storage.get_active_intentions().map_err(|e| e.to_string())?;
@@ -521,10 +529,7 @@ async fn execute_update(
storage: &Arc<Storage>,
args: &UnifiedIntentionArgs,
) -> Result<Value, String> {
let intention_id = args.id.as_ref().ok_or("Missing 'id' for update action")?;
let status = args
.status
@@ -690,7 +695,9 @@ mod tests {
"action": "set",
"description": description
});
let result = execute(storage, &test_cognitive(), Some(args))
.await
.unwrap();
result["intentionId"].as_str().unwrap().to_string()
}
@@ -742,10 +749,12 @@ mod tests {
assert_eq!(value["success"], true);
assert_eq!(value["action"], "set");
assert!(value["intentionId"].is_string());
assert!(
value["message"]
.as_str()
.unwrap()
.contains("Intention created")
);
}
#[tokio::test]
@@ -953,7 +962,9 @@ mod tests {
"codebase": "payments"
}
});
execute(&storage, &test_cognitive(), Some(set_args))
.await
.unwrap();
// Check with matching context
let check_args = serde_json::json!({
@@ -984,7 +995,9 @@ mod tests {
"at": past_time
}
});
execute(&storage, &test_cognitive(), Some(set_args))
.await
.unwrap();
let check_args = serde_json::json!({ "action": "check" });
let result = execute(&storage, &test_cognitive(), Some(check_args)).await;
@@ -1183,7 +1196,9 @@ mod tests {
"id": intention_id,
"status": "complete"
});
execute(&storage, &test_cognitive(), Some(complete_args))
.await
.unwrap();
// Create another active one
create_test_intention(&storage, "Active task").await;
@@ -1193,7 +1208,9 @@ mod tests {
"action": "list",
"filter_status": "fulfilled"
});
let result = execute(&storage, &test_cognitive(), Some(list_args))
.await
.unwrap();
assert_eq!(result["total"], 1);
assert_eq!(result["status"], "fulfilled");
}
@@ -1229,14 +1246,18 @@ mod tests {
"id": intention_id,
"status": "complete"
});
execute(&storage, &test_cognitive(), Some(complete_args))
.await
.unwrap();
// List all
let list_args = serde_json::json!({
"action": "list",
"filter_status": "all"
});
let result = execute(&storage, &test_cognitive(), Some(list_args))
.await
.unwrap();
assert_eq!(result["total"], 2);
}
@@ -1253,7 +1274,9 @@ mod tests {
// 2. Verify it appears in list
let list_args = serde_json::json!({ "action": "list" });
let list_result = execute(&storage, &test_cognitive(), Some(list_args))
.await
.unwrap();
assert_eq!(list_result["total"], 1);
// 3. Snooze it
@@ -1277,7 +1300,9 @@ mod tests {
// 5. Verify it's no longer active
let final_list_args = serde_json::json!({ "action": "list" });
let final_list = execute(&storage, &test_cognitive(), Some(final_list_args))
.await
.unwrap();
assert_eq!(final_list["total"], 0);
// 6. Verify it's in fulfilled list
@@ -1285,7 +1310,9 @@ mod tests {
"action": "list",
"filter_status": "fulfilled"
});
let fulfilled_list = execute(&storage, &test_cognitive(), Some(fulfilled_args))
.await
.unwrap();
assert_eq!(fulfilled_list["total"], 1);
}
@@ -1299,25 +1326,33 @@ mod tests {
"description": "Low priority task",
"priority": "low"
});
execute(&storage, &test_cognitive(), Some(args_low))
.await
.unwrap();
let args_critical = serde_json::json!({
"action": "set",
"description": "Critical task",
"priority": "critical"
});
execute(&storage, &test_cognitive(), Some(args_critical))
.await
.unwrap();
let args_normal = serde_json::json!({
"action": "set",
"description": "Normal task",
"priority": "normal"
});
execute(&storage, &test_cognitive(), Some(args_normal))
.await
.unwrap();
// List and verify ordering (critical should be first due to priority DESC ordering)
let list_args = serde_json::json!({ "action": "list" });
let list_result = execute(&storage, &test_cognitive(), Some(list_args))
.await
.unwrap();
let intentions = list_result["intentions"].as_array().unwrap();
assert!(intentions.len() >= 3);
@@ -1335,10 +1370,12 @@ mod tests {
let schema_value = schema();
assert_eq!(schema_value["type"], "object");
assert!(schema_value["properties"]["action"].is_object());
assert!(
schema_value["required"]
.as_array()
.unwrap()
.contains(&serde_json::json!("action"))
);
}
#[test]


@@ -6,7 +6,7 @@ use serde::Deserialize;
use serde_json::Value;
use std::sync::Arc;
use chrono::{DateTime, Duration, Utc};
use uuid::Uuid;
use vestige_core::{IntentionRecord, Storage};
@@ -221,10 +221,7 @@ struct ListArgs {
}
/// Execute set_intention tool
pub async fn execute_set(storage: &Arc<Storage>, args: Option<Value>) -> Result<Value, String> {
let args: SetIntentionArgs = match args {
Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
None => return Err("Missing arguments".to_string()),
@@ -239,7 +236,10 @@ pub async fn execute_set(
// Determine trigger type and data
let (trigger_type, trigger_data) = if let Some(trigger) = &args.trigger {
let t_type = trigger
.trigger_type
.clone()
.unwrap_or_else(|| "time".to_string());
let data = serde_json::to_string(trigger).unwrap_or_else(|_| "{}".to_string());
(t_type, data)
} else {
@@ -256,13 +256,17 @@ pub async fn execute_set(
// Parse deadline
let deadline = args.deadline.and_then(|s| {
DateTime::parse_from_rfc3339(&s)
.ok()
.map(|dt| dt.with_timezone(&Utc))
});
// Calculate trigger time if specified
let trigger_at = if let Some(trigger) = &args.trigger {
if let Some(at) = &trigger.at {
DateTime::parse_from_rfc3339(at)
.ok()
.map(|dt| dt.with_timezone(&Utc))
} else {
trigger.in_minutes.map(|mins| now + Duration::minutes(mins))
}
@@ -303,13 +307,13 @@ pub async fn execute_set(
}
/// Execute check_intentions tool
pub async fn execute_check(storage: &Arc<Storage>, args: Option<Value>) -> Result<Value, String> {
let args: CheckIntentionsArgs = match args {
Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
None => CheckIntentionsArgs {
context: None,
include_snoozed: None,
},
};
let now = Utc::now();
@@ -344,14 +348,20 @@ pub async fn execute_check(
Some("context") => {
if let Some(ctx) = &args.context {
// Check codebase match
if let (Some(trigger_codebase), Some(current_codebase)) =
(&t.codebase, &ctx.codebase)
{
current_codebase
.to_lowercase()
.contains(&trigger_codebase.to_lowercase())
// Check file pattern match
} else if let (Some(pattern), Some(file)) = (&t.file_pattern, &ctx.file) {
file.contains(pattern)
// Check topic match
} else if let (Some(topic), Some(topics)) = (&t.topic, &ctx.topics) {
topics
.iter()
.any(|t| t.to_lowercase().contains(&topic.to_lowercase()))
} else {
false
}
@@ -406,7 +416,8 @@ pub async fn execute_complete(
None => return Err("Missing intention_id".to_string()),
};
let updated = storage
.update_intention_status(&args.intention_id, "fulfilled")
.map_err(|e| e.to_string())?;
if updated {
@@ -421,10 +432,7 @@ pub async fn execute_complete(
}
/// Execute snooze_intention tool
pub async fn execute_snooze(storage: &Arc<Storage>, args: Option<Value>) -> Result<Value, String> {
let args: SnoozeArgs = match args {
Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
None => return Err("Missing intention_id".to_string()),
@@ -433,7 +441,8 @@ pub async fn execute_snooze(
let minutes = args.minutes.unwrap_or(30);
let snooze_until = Utc::now() + Duration::minutes(minutes);
let updated = storage
.snooze_intention(&args.intention_id, snooze_until)
.map_err(|e| e.to_string())?;
if updated {
@@ -449,13 +458,13 @@ pub async fn execute_snooze(
}
/// Execute list_intentions tool
pub async fn execute_list(
storage: &Arc<Storage>,
args: Option<Value>,
) -> Result<Value, String> {
pub async fn execute_list(storage: &Arc<Storage>, args: Option<Value>) -> Result<Value, String> {
let args: ListArgs = match args {
Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
None => ListArgs { status: None, limit: None },
None => ListArgs {
status: None,
limit: None,
},
};
let status = args.status.as_deref().unwrap_or("active");
@ -463,15 +472,29 @@ pub async fn execute_list(
let intentions = if status == "all" {
// Get all by combining different statuses
let mut all = storage.get_active_intentions().map_err(|e| e.to_string())?;
all.extend(storage.get_intentions_by_status("fulfilled").map_err(|e| e.to_string())?);
all.extend(storage.get_intentions_by_status("cancelled").map_err(|e| e.to_string())?);
all.extend(storage.get_intentions_by_status("snoozed").map_err(|e| e.to_string())?);
all.extend(
storage
.get_intentions_by_status("fulfilled")
.map_err(|e| e.to_string())?,
);
all.extend(
storage
.get_intentions_by_status("cancelled")
.map_err(|e| e.to_string())?,
);
all.extend(
storage
.get_intentions_by_status("snoozed")
.map_err(|e| e.to_string())?,
);
all
} else if status == "active" {
// Use get_active_intentions for proper priority ordering
storage.get_active_intentions().map_err(|e| e.to_string())?
} else {
storage.get_intentions_by_status(status).map_err(|e| e.to_string())?
storage
.get_intentions_by_status(status)
.map_err(|e| e.to_string())?
};
let limit = args.limit.unwrap_or(20) as usize;
@ -574,7 +597,12 @@ mod tests {
let value = result.unwrap();
assert_eq!(value["success"], true);
assert!(value["intentionId"].is_string());
assert!(value["message"].as_str().unwrap().contains("Intention created"));
assert!(
value["message"]
.as_str()
.unwrap()
.contains("Intention created")
);
}
#[tokio::test]
@ -1017,14 +1045,24 @@ mod tests {
let schema_value = set_schema();
assert_eq!(schema_value["type"], "object");
assert!(schema_value["properties"]["description"].is_object());
assert!(schema_value["required"].as_array().unwrap().contains(&serde_json::json!("description")));
assert!(
schema_value["required"]
.as_array()
.unwrap()
.contains(&serde_json::json!("description"))
);
}
#[test]
fn test_complete_schema_has_required_fields() {
let schema_value = complete_schema();
assert!(schema_value["properties"]["intentionId"].is_object());
assert!(schema_value["required"].as_array().unwrap().contains(&serde_json::json!("intentionId")));
assert!(
schema_value["required"]
.as_array()
.unwrap()
.contains(&serde_json::json!("intentionId"))
);
}
#[test]
@ -1032,7 +1070,12 @@ mod tests {
let schema_value = snooze_schema();
assert!(schema_value["properties"]["intentionId"].is_object());
assert!(schema_value["properties"]["minutes"].is_object());
assert!(schema_value["required"].as_array().unwrap().contains(&serde_json::json!("intentionId")));
assert!(
schema_value["required"]
.as_array()
.unwrap()
.contains(&serde_json::json!("intentionId"))
);
}
#[test]


@ -42,10 +42,7 @@ struct KnowledgeArgs {
id: String,
}
pub async fn execute_get(
storage: &Arc<Storage>,
args: Option<Value>,
) -> Result<Value, String> {
pub async fn execute_get(storage: &Arc<Storage>, args: Option<Value>) -> Result<Value, String> {
let args: KnowledgeArgs = match args {
Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
None => return Err("Missing arguments".to_string()),
@ -90,10 +87,7 @@ pub async fn execute_get(
}
}
pub async fn execute_delete(
storage: &Arc<Storage>,
args: Option<Value>,
) -> Result<Value, String> {
pub async fn execute_delete(storage: &Arc<Storage>, args: Option<Value>) -> Result<Value, String> {
let args: KnowledgeArgs = match args {
Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
None => return Err("Missing arguments".to_string()),


@ -159,7 +159,8 @@ pub async fn execute_system_status(
let mut recommendations = Vec::new();
if status == "critical" {
recommendations.push("CRITICAL: Many memories have very low retention. Review important memories.");
recommendations
.push("CRITICAL: Many memories have very low retention. Review important memories.");
}
if stats.nodes_due_for_review > 5 {
recommendations.push("Review due memories to strengthen retention.");
@ -253,7 +254,6 @@ pub async fn execute_system_status(
};
let last_backup = Storage::get_last_backup_timestamp();
Ok(serde_json::json!({
"tool": "system_status",
// Health
@ -336,7 +336,8 @@ pub async fn execute_health_check(
let mut recommendations = Vec::new();
if status == "critical" {
recommendations.push("CRITICAL: Many memories have very low retention. Review important memories.");
recommendations
.push("CRITICAL: Many memories have very low retention. Review important memories.");
}
if stats.nodes_due_for_review > 5 {
recommendations.push("Review due memories to strengthen retention.");
@ -505,7 +506,9 @@ pub async fn execute_stats(
})
.collect();
if !memories_for_compression.is_empty() {
let groups = cog.compressor.find_compressible_groups(&memories_for_compression);
let groups = cog
.compressor
.find_compressible_groups(&memories_for_compression);
Some(serde_json::json!({
"groupCount": groups.len(),
"totalCompressible": groups.iter().map(|g| g.len()).sum::<usize>(),
@ -566,14 +569,13 @@ pub async fn execute_stats(
}
/// Backup tool
pub async fn execute_backup(
storage: &Arc<Storage>,
_args: Option<Value>,
) -> Result<Value, String> {
pub async fn execute_backup(storage: &Arc<Storage>, _args: Option<Value>) -> Result<Value, String> {
// Determine backup path
let vestige_dir = directories::ProjectDirs::from("com", "vestige", "core")
.ok_or("Could not determine data directory")?;
let backup_dir = vestige_dir.data_dir().parent()
let backup_dir = vestige_dir
.data_dir()
.parent()
.unwrap_or(vestige_dir.data_dir())
.join("backups");
@ -585,7 +587,8 @@ pub async fn execute_backup(
// Use VACUUM INTO for a consistent backup (handles WAL properly)
{
storage.backup_to(&backup_path)
storage
.backup_to(&backup_path)
.map_err(|e| format!("Failed to create backup: {}", e))?;
}
@ -611,10 +614,7 @@ struct ExportArgs {
}
/// Export tool
pub async fn execute_export(
storage: &Arc<Storage>,
args: Option<Value>,
) -> Result<Value, String> {
pub async fn execute_export(storage: &Arc<Storage>, args: Option<Value>) -> Result<Value, String> {
let args: ExportArgs = match args {
Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
None => ExportArgs {
@ -627,7 +627,10 @@ pub async fn execute_export(
let format = args.format.unwrap_or_else(|| "json".to_string());
if format != "json" && format != "jsonl" {
return Err(format!("Invalid format '{}'. Must be 'json' or 'jsonl'.", format));
return Err(format!(
"Invalid format '{}'. Must be 'json' or 'jsonl'.",
format
));
}
// Parse since date
@ -648,7 +651,9 @@ pub async fn execute_export(
let max_nodes = 100_000;
let mut offset = 0;
loop {
let batch = storage.get_all_nodes(page_size, offset).map_err(|e| e.to_string())?;
let batch = storage
.get_all_nodes(page_size, offset)
.map_err(|e| e.to_string())?;
let batch_len = batch.len();
all_nodes.extend(batch);
if batch_len < page_size as usize || all_nodes.len() >= max_nodes {
@ -661,7 +666,10 @@ pub async fn execute_export(
let filtered: Vec<&vestige_core::KnowledgeNode> = all_nodes
.iter()
.filter(|node| {
if since_date.as_ref().is_some_and(|since_dt| node.created_at < *since_dt) {
if since_date
.as_ref()
.is_some_and(|since_dt| node.created_at < *since_dt)
{
return false;
}
if !tag_filter.is_empty() {
@ -678,7 +686,9 @@ pub async fn execute_export(
// Determine export path — always constrained to vestige exports directory
let vestige_dir = directories::ProjectDirs::from("com", "vestige", "core")
.ok_or("Could not determine data directory")?;
let export_dir = vestige_dir.data_dir().parent()
let export_dir = vestige_dir
.data_dir()
.parent()
.unwrap_or(vestige_dir.data_dir())
.join("exports");
std::fs::create_dir_all(&export_dir)
@ -725,7 +735,9 @@ pub async fn execute_export(
}
writer.flush().map_err(|e| e.to_string())?;
let file_size = std::fs::metadata(&export_path).map(|m| m.len()).unwrap_or(0);
let file_size = std::fs::metadata(&export_path)
.map(|m| m.len())
.unwrap_or(0);
Ok(serde_json::json!({
"tool": "export",
@ -746,10 +758,7 @@ struct GcArgs {
}
/// Garbage collection tool
pub async fn execute_gc(
storage: &Arc<Storage>,
args: Option<Value>,
) -> Result<Value, String> {
pub async fn execute_gc(storage: &Arc<Storage>, args: Option<Value>) -> Result<Value, String> {
let args: GcArgs = match args {
Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
None => GcArgs {
@ -771,7 +780,9 @@ pub async fn execute_gc(
let max_nodes = 100_000;
let mut offset = 0;
loop {
let batch = storage.get_all_nodes(page_size, offset).map_err(|e| e.to_string())?;
let batch = storage
.get_all_nodes(page_size, offset)
.map_err(|e| e.to_string())?;
let batch_len = batch.len();
all_nodes.extend(batch);
if batch_len < page_size as usize || all_nodes.len() >= max_nodes {
@ -903,16 +914,18 @@ mod tests {
async fn test_system_status_with_memories() {
let (storage, _dir) = test_storage().await;
{
storage.ingest(vestige_core::IngestInput {
content: "Test memory for status".to_string(),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: vec![],
valid_from: None,
valid_until: None,
}).unwrap();
storage
.ingest(vestige_core::IngestInput {
content: "Test memory for status".to_string(),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: vec![],
valid_from: None,
valid_until: None,
})
.unwrap();
}
let result = execute_system_status(&storage, &test_cognitive(), None).await;
assert!(result.is_ok());
@ -942,7 +955,10 @@ mod tests {
assert!(triggers.is_object(), "automationTriggers should be present");
assert!(triggers["lastDreamTimestamp"].is_null(), "No dreams yet");
assert_eq!(triggers["savesSinceLastDream"], 0, "Empty DB = 0 saves");
assert!(triggers["lastConsolidationTimestamp"].is_null(), "No consolidation yet");
assert!(
triggers["lastConsolidationTimestamp"].is_null(),
"No consolidation yet"
);
// lastBackupTimestamp depends on filesystem state, just check it exists
assert!(triggers.get("lastBackupTimestamp").is_some());
}
@ -952,16 +968,18 @@ mod tests {
let (storage, _dir) = test_storage().await;
{
for i in 0..3 {
storage.ingest(vestige_core::IngestInput {
content: format!("Automation trigger test memory {}", i),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: vec![],
valid_from: None,
valid_until: None,
}).unwrap();
storage
.ingest(vestige_core::IngestInput {
content: format!("Automation trigger test memory {}", i),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: vec![],
valid_from: None,
valid_until: None,
})
.unwrap();
}
}
let result = execute_system_status(&storage, &test_cognitive(), None).await;


@ -75,19 +75,14 @@ pub fn stats_schema() -> Value {
}
/// Get the cognitive state of a specific memory
pub async fn execute_get(
storage: &Arc<Storage>,
args: Option<Value>,
) -> Result<Value, String> {
pub async fn execute_get(storage: &Arc<Storage>, args: Option<Value>) -> Result<Value, String> {
let args = args.ok_or("Missing arguments")?;
let memory_id = args["memory_id"]
.as_str()
.ok_or("memory_id is required")?;
let memory_id = args["memory_id"].as_str().ok_or("memory_id is required")?;
// Get the memory
let memory = storage.get_node(memory_id)
let memory = storage
.get_node(memory_id)
.map_err(|e| format!("Error: {}", e))?
.ok_or("Memory not found")?;
@ -128,19 +123,14 @@ pub async fn execute_get(
}
/// List memories by state
pub async fn execute_list(
storage: &Arc<Storage>,
args: Option<Value>,
) -> Result<Value, String> {
pub async fn execute_list(storage: &Arc<Storage>, args: Option<Value>) -> Result<Value, String> {
let args = args.unwrap_or(serde_json::json!({}));
let state_filter = args["state"].as_str();
let limit = args["limit"].as_i64().unwrap_or(20) as usize;
// Get all memories
let memories = storage.get_all_nodes(500, 0)
.map_err(|e| e.to_string())?;
let memories = storage.get_all_nodes(500, 0).map_err(|e| e.to_string())?;
// Categorize by state
let mut active = Vec::new();
@ -199,19 +189,15 @@ pub async fn execute_list(
"dormant": { "count": dormant.len(), "memories": dormant.into_iter().take(limit).collect::<Vec<_>>() },
"silent": { "count": silent.len(), "memories": silent.into_iter().take(limit).collect::<Vec<_>>() },
"unavailable": { "count": unavailable.len(), "memories": unavailable.into_iter().take(limit).collect::<Vec<_>>() }
})
}),
};
Ok(result)
}
/// Get memory state statistics
pub async fn execute_stats(
storage: &Arc<Storage>,
) -> Result<Value, String> {
let memories = storage.get_all_nodes(1000, 0)
.map_err(|e| e.to_string())?;
pub async fn execute_stats(storage: &Arc<Storage>) -> Result<Value, String> {
let memories = storage.get_all_nodes(1000, 0).map_err(|e| e.to_string())?;
let total = memories.len();
let mut active_count = 0;
@ -237,7 +223,11 @@ pub async fn execute_stats(
}
}
let avg_accessibility = if total > 0 { total_accessibility / total as f64 } else { 0.0 };
let avg_accessibility = if total > 0 {
total_accessibility / total as f64
} else {
0.0
};
Ok(serde_json::json!({
"totalMemories": total,


@ -219,7 +219,6 @@ async fn execute_delete(storage: &Arc<Storage>, id: &str) -> Result<Value, Strin
/// Get accessibility state of a memory (Active/Dormant/Silent/Unavailable)
async fn execute_state(storage: &Arc<Storage>, id: &str) -> Result<Value, String> {
// Get the memory
let memory = storage
.get_node(id)
@ -270,8 +269,9 @@ async fn execute_promote(
id: &str,
reason: Option<String>,
) -> Result<Value, String> {
let before = storage.get_node(id).map_err(|e| e.to_string())?
let before = storage
.get_node(id)
.map_err(|e| e.to_string())?
.ok_or_else(|| format!("Node not found: {}", id))?;
let node = storage.promote_memory(id).map_err(|e| e.to_string())?;
@ -325,15 +325,17 @@ async fn execute_demote(
id: &str,
reason: Option<String>,
) -> Result<Value, String> {
let before = storage.get_node(id).map_err(|e| e.to_string())?
let before = storage
.get_node(id)
.map_err(|e| e.to_string())?
.ok_or_else(|| format!("Node not found: {}", id))?;
let node = storage.demote_memory(id).map_err(|e| e.to_string())?;
// Cognitive feedback pipeline
if let Ok(mut cog) = cognitive.try_lock() {
cog.reward_signal.record_outcome(id, OutcomeType::NotHelpful);
cog.reward_signal
.record_outcome(id, OutcomeType::NotHelpful);
cog.importance_tracker.on_retrieved(id, false);
if cog.reconsolidation.is_labile(id) {
cog.reconsolidation.apply_modification(
@ -429,22 +431,34 @@ mod tests {
// Test Active state
let accessibility = compute_accessibility(0.9, 0.8, 0.7);
assert!(accessibility >= ACCESSIBILITY_ACTIVE);
assert!(matches!(state_from_accessibility(accessibility), MemoryState::Active));
assert!(matches!(
state_from_accessibility(accessibility),
MemoryState::Active
));
// Test Dormant state
let accessibility = compute_accessibility(0.5, 0.5, 0.5);
assert!(accessibility >= ACCESSIBILITY_DORMANT && accessibility < ACCESSIBILITY_ACTIVE);
assert!(matches!(state_from_accessibility(accessibility), MemoryState::Dormant));
assert!((ACCESSIBILITY_DORMANT..ACCESSIBILITY_ACTIVE).contains(&accessibility));
assert!(matches!(
state_from_accessibility(accessibility),
MemoryState::Dormant
));
// Test Silent state
let accessibility = compute_accessibility(0.2, 0.2, 0.2);
assert!(accessibility >= ACCESSIBILITY_SILENT && accessibility < ACCESSIBILITY_DORMANT);
assert!(matches!(state_from_accessibility(accessibility), MemoryState::Silent));
assert!((ACCESSIBILITY_SILENT..ACCESSIBILITY_DORMANT).contains(&accessibility));
assert!(matches!(
state_from_accessibility(accessibility),
MemoryState::Silent
));
// Test Unavailable state
let accessibility = compute_accessibility(0.05, 0.05, 0.05);
assert!(accessibility < ACCESSIBILITY_SILENT);
assert!(matches!(state_from_accessibility(accessibility), MemoryState::Unavailable));
assert!(matches!(
state_from_accessibility(accessibility),
MemoryState::Unavailable
));
}
#[test]
@ -538,7 +552,8 @@ mod tests {
#[tokio::test]
async fn test_get_nonexistent_memory() {
let (storage, _dir) = test_storage().await;
let args = serde_json::json!({ "action": "get", "id": "00000000-0000-0000-0000-000000000000" });
let args =
serde_json::json!({ "action": "get", "id": "00000000-0000-0000-0000-000000000000" });
let result = execute(&storage, &test_cognitive(), Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
@ -562,13 +577,17 @@ mod tests {
async fn test_delete_nonexistent_memory() {
let (storage, _dir) = test_storage().await;
// Ingest+delete a throwaway memory to warm writer after WAL migration
let warmup_id = storage.ingest(vestige_core::IngestInput {
content: "warmup".to_string(),
node_type: "fact".to_string(),
..Default::default()
}).unwrap().id;
let warmup_id = storage
.ingest(vestige_core::IngestInput {
content: "warmup".to_string(),
node_type: "fact".to_string(),
..Default::default()
})
.unwrap()
.id;
let _ = storage.delete_node(&warmup_id);
let args = serde_json::json!({ "action": "delete", "id": "00000000-0000-0000-0000-000000000000" });
let args =
serde_json::json!({ "action": "delete", "id": "00000000-0000-0000-0000-000000000000" });
let result = execute(&storage, &test_cognitive(), Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
@ -581,7 +600,9 @@ mod tests {
let (storage, _dir) = test_storage().await;
let id = ingest_memory(&storage).await;
let del_args = serde_json::json!({ "action": "delete", "id": id });
execute(&storage, &test_cognitive(), Some(del_args)).await.unwrap();
execute(&storage, &test_cognitive(), Some(del_args))
.await
.unwrap();
let get_args = serde_json::json!({ "action": "get", "id": id });
let result = execute(&storage, &test_cognitive(), Some(get_args)).await;
let value = result.unwrap();
@ -612,7 +633,8 @@ mod tests {
#[tokio::test]
async fn test_state_nonexistent_memory_fails() {
let (storage, _dir) = test_storage().await;
let args = serde_json::json!({ "action": "state", "id": "00000000-0000-0000-0000-000000000000" });
let args =
serde_json::json!({ "action": "state", "id": "00000000-0000-0000-0000-000000000000" });
let result = execute(&storage, &test_cognitive(), Some(args)).await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("not found"));
@ -629,7 +651,10 @@ mod tests {
fn test_accessibility_boundary_zero() {
let a = compute_accessibility(0.0, 0.0, 0.0);
assert_eq!(a, 0.0);
assert!(matches!(state_from_accessibility(a), MemoryState::Unavailable));
assert!(matches!(
state_from_accessibility(a),
MemoryState::Unavailable
));
}
// ========================================================================
@ -708,7 +733,8 @@ mod tests {
#[tokio::test]
async fn test_demote_nonexistent_node_fails() {
let (storage, _dir) = test_storage().await;
let args = serde_json::json!({ "action": "demote", "id": "00000000-0000-0000-0000-000000000000" });
let args =
serde_json::json!({ "action": "demote", "id": "00000000-0000-0000-0000-000000000000" });
let result = execute(&storage, &test_cognitive(), Some(args)).await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("Node not found"));
@ -761,9 +787,24 @@ mod tests {
assert_eq!(value["success"], true);
assert_eq!(value["action"], "edit");
assert_eq!(value["nodeId"], id);
assert!(value["oldContentPreview"].as_str().unwrap().contains("Memory unified test content"));
assert!(value["newContentPreview"].as_str().unwrap().contains("Updated memory content"));
assert!(value["note"].as_str().unwrap().contains("FSRS state preserved"));
assert!(
value["oldContentPreview"]
.as_str()
.unwrap()
.contains("Memory unified test content")
);
assert!(
value["newContentPreview"]
.as_str()
.unwrap()
.contains("Updated memory content")
);
assert!(
value["note"]
.as_str()
.unwrap()
.contains("FSRS state preserved")
);
}
#[tokio::test]
@ -780,7 +821,9 @@ mod tests {
"id": id,
"content": "Completely new content after edit"
});
execute(&storage, &test_cognitive(), Some(args)).await.unwrap();
execute(&storage, &test_cognitive(), Some(args))
.await
.unwrap();
// Verify FSRS state preserved
let after = storage.get_node(&id).unwrap().unwrap();

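The state tests above exercise a four-state accessibility ladder (Active / Dormant / Silent / Unavailable). A minimal standalone sketch of that mapping follows; the threshold constants here are illustrative assumptions — the shipped `ACCESSIBILITY_*` values live in the tool module and are not visible in this diff:

```rust
// Illustrative accessibility ladder matching the tests above.
// NOTE: threshold values are assumed for the sketch, not the shipped constants.
const ACCESSIBILITY_ACTIVE: f64 = 0.7;
const ACCESSIBILITY_DORMANT: f64 = 0.4;
const ACCESSIBILITY_SILENT: f64 = 0.1;

#[derive(Debug, PartialEq)]
enum MemoryState {
    Active,
    Dormant,
    Silent,
    Unavailable,
}

// Higher accessibility → more retrievable; below the silent floor the
// memory still exists but is effectively unretrievable by search.
fn state_from_accessibility(a: f64) -> MemoryState {
    if a >= ACCESSIBILITY_ACTIVE {
        MemoryState::Active
    } else if a >= ACCESSIBILITY_DORMANT {
        MemoryState::Dormant
    } else if a >= ACCESSIBILITY_SILENT {
        MemoryState::Silent
    } else {
        MemoryState::Unavailable
    }
}
```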

@ -34,12 +34,15 @@ pub mod restore;
pub mod session_context;
// v1.9: Autonomic tools
pub mod health;
pub mod graph;
pub mod health;
// v2.1: Cross-reference (connect the dots)
pub mod cross_reference;
// v2.0.5: Active Forgetting — Anderson 2025 + Davis Rac1
pub mod suppress;
// Deprecated/internal tools — not advertised in the public MCP tools/list,
// but some functions are actively dispatched for backwards compatibility
// and internal cognitive operations. #[allow(dead_code)] suppresses warnings


@ -57,23 +57,30 @@ pub async fn execute(
project_context: context
.and_then(|c| c.get("codebase"))
.and_then(|v| v.as_str())
.map(|name| vestige_core::neuroscience::predictive_retrieval::ProjectContext {
name: name.to_string(),
path: String::new(),
technologies: Vec::new(),
primary_language: None,
}),
.map(
|name| vestige_core::neuroscience::predictive_retrieval::ProjectContext {
name: name.to_string(),
path: String::new(),
technologies: Vec::new(),
primary_language: None,
},
),
};
// Get predictions from predictive memory
let predictions = cog.predictive_memory.predict_needed_memories(&session_ctx)
let predictions = cog
.predictive_memory
.predict_needed_memories(&session_ctx)
.unwrap_or_default();
let suggestions = cog.predictive_memory.get_proactive_suggestions(0.3)
let suggestions = cog
.predictive_memory
.get_proactive_suggestions(0.3)
.unwrap_or_default();
let top_interests = cog.predictive_memory.get_top_interests(10)
let top_interests = cog
.predictive_memory
.get_top_interests(10)
.unwrap_or_default();
let accuracy = cog.predictive_memory.prediction_accuracy()
.unwrap_or(0.0);
let accuracy = cog.predictive_memory.prediction_accuracy().unwrap_or(0.0);
// Build speculative context
let speculative_context = vestige_core::PredictionContext {
@ -91,7 +98,9 @@ pub async fn execute(
.map(PathBuf::from),
timestamp: Some(chrono::Utc::now()),
};
let speculative = cog.speculative_retriever.predict_needed(&speculative_context);
let speculative = cog
.speculative_retriever
.predict_needed(&speculative_context);
Ok(serde_json::json!({
"predictions": predictions.iter().map(|p| serde_json::json!({


@ -44,10 +44,7 @@ struct RecallArgs {
min_retention: Option<f64>,
}
pub async fn execute(
storage: &Arc<Storage>,
args: Option<Value>,
) -> Result<Value, String> {
pub async fn execute(storage: &Arc<Storage>, args: Option<Value>) -> Result<Value, String> {
let args: RecallArgs = match args {
Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
None => return Err("Missing arguments".to_string()),
@ -101,8 +98,8 @@ pub async fn execute(
#[cfg(test)]
mod tests {
use super::*;
use vestige_core::IngestInput;
use tempfile::TempDir;
use vestige_core::IngestInput;
/// Create a test storage instance with a temporary database
async fn test_storage() -> (Arc<Storage>, TempDir) {
@ -266,7 +263,8 @@ mod tests {
#[tokio::test]
async fn test_recall_returns_matching_content() {
let (storage, _dir) = test_storage().await;
let node_id = ingest_test_content(&storage, "Python is a dynamic programming language.").await;
let node_id =
ingest_test_content(&storage, "Python is a dynamic programming language.").await;
let args = serde_json::json!({ "query": "python" });
let result = execute(&storage, Some(args)).await;
@ -370,7 +368,12 @@ mod tests {
let schema_value = schema();
assert_eq!(schema_value["type"], "object");
assert!(schema_value["properties"]["query"].is_object());
assert!(schema_value["required"].as_array().unwrap().contains(&serde_json::json!("query")));
assert!(
schema_value["required"]
.as_array()
.unwrap()
.contains(&serde_json::json!("query"))
);
}
#[test]


@ -8,7 +8,6 @@ use serde::Deserialize;
use serde_json::Value;
use std::sync::Arc;
use vestige_core::{IngestInput, Storage};
/// Input schema for restore tool
@ -51,10 +50,7 @@ struct MemoryBackup {
source: Option<String>,
}
pub async fn execute(
storage: &Arc<Storage>,
args: Option<Value>,
) -> Result<Value, String> {
pub async fn execute(storage: &Arc<Storage>, args: Option<Value>) -> Result<Value, String> {
let args: RestoreArgs = match args {
Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
None => return Err("Missing arguments".to_string()),
@ -71,25 +67,26 @@ pub async fn execute(
// Try parsing as wrapped format first (MCP response wrapper),
// then fall back to direct RecallResult
let memories: Vec<MemoryBackup> =
if let Ok(wrapper) = serde_json::from_str::<Vec<BackupWrapper>>(&backup_content) {
if let Some(first) = wrapper.first() {
let recall: RecallResult = serde_json::from_str(&first.text)
.map_err(|e| format!("Failed to parse backup contents: {}", e))?;
recall.results
} else {
return Err("Empty backup file".to_string());
}
} else if let Ok(recall) = serde_json::from_str::<RecallResult>(&backup_content) {
let memories: Vec<MemoryBackup> = if let Ok(wrapper) =
serde_json::from_str::<Vec<BackupWrapper>>(&backup_content)
{
if let Some(first) = wrapper.first() {
let recall: RecallResult = serde_json::from_str(&first.text)
.map_err(|e| format!("Failed to parse backup contents: {}", e))?;
recall.results
} else if let Ok(nodes) = serde_json::from_str::<Vec<MemoryBackup>>(&backup_content) {
nodes
} else {
return Err(
"Unrecognized backup format. Expected MCP wrapper, RecallResult, or array of memories."
.to_string(),
);
};
return Err("Empty backup file".to_string());
}
} else if let Ok(recall) = serde_json::from_str::<RecallResult>(&backup_content) {
recall.results
} else if let Ok(nodes) = serde_json::from_str::<Vec<MemoryBackup>>(&backup_content) {
nodes
} else {
return Err(
"Unrecognized backup format. Expected MCP wrapper, RecallResult, or array of memories."
.to_string(),
);
};
let total = memories.len();
if total == 0 {
@ -108,7 +105,10 @@ pub async fn execute(
for memory in &memories {
let input = IngestInput {
content: memory.content.clone(),
node_type: memory.node_type.clone().unwrap_or_else(|| "fact".to_string()),
node_type: memory
.node_type
.clone()
.unwrap_or_else(|| "fact".to_string()),
source: memory.source.clone(),
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
@ -157,10 +157,12 @@ mod tests {
let s = schema();
assert_eq!(s["type"], "object");
assert!(s["properties"]["path"].is_object());
assert!(s["required"]
.as_array()
.unwrap()
.contains(&serde_json::json!("path")));
assert!(
s["required"]
.as_array()
.unwrap()
.contains(&serde_json::json!("path"))
);
}
#[tokio::test]


@ -6,7 +6,6 @@ use serde::Deserialize;
use serde_json::Value;
use std::sync::Arc;
use vestige_core::{Rating, Storage};
/// Input schema for mark_reviewed tool
@ -37,10 +36,7 @@ struct ReviewArgs {
rating: Option<i32>,
}
pub async fn execute(
storage: &Arc<Storage>,
args: Option<Value>,
) -> Result<Value, String> {
pub async fn execute(storage: &Arc<Storage>, args: Option<Value>) -> Result<Value, String> {
let args: ReviewArgs = match args {
Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
None => return Err("Missing arguments".to_string()),
@ -54,15 +50,18 @@ pub async fn execute(
return Err("Rating must be between 1 and 4".to_string());
}
let rating = Rating::from_i32(rating_value)
.ok_or_else(|| "Invalid rating value".to_string())?;
let rating =
Rating::from_i32(rating_value).ok_or_else(|| "Invalid rating value".to_string())?;
// Get node before review for comparison
let before = storage.get_node(&args.id).map_err(|e| e.to_string())?
let before = storage
.get_node(&args.id)
.map_err(|e| e.to_string())?
.ok_or_else(|| format!("Node not found: {}", args.id))?;
let node = storage.mark_reviewed(&args.id, rating).map_err(|e| e.to_string())?;
let node = storage
.mark_reviewed(&args.id, rating)
.map_err(|e| e.to_string())?;
let rating_name = match rating {
Rating::Again => "Again",
@ -97,8 +96,8 @@ pub async fn execute(
#[cfg(test)]
mod tests {
use super::*;
use vestige_core::IngestInput;
use tempfile::TempDir;
use vestige_core::IngestInput;
/// Create a test storage instance with a temporary database
async fn test_storage() -> (Arc<Storage>, TempDir) {
@ -438,7 +437,12 @@ mod tests {
let schema_value = schema();
assert_eq!(schema_value["type"], "object");
assert!(schema_value["properties"]["id"].is_object());
assert!(schema_value["required"].as_array().unwrap().contains(&serde_json::json!("id")));
assert!(
schema_value["required"]
.as_array()
.unwrap()
.contains(&serde_json::json!("id"))
);
}
#[test]


@ -101,7 +101,6 @@ pub async fn execute_semantic(
return Err("Query cannot be empty".to_string());
}
// Check if embeddings are ready
if !storage.is_embedding_ready() {
return Ok(serde_json::json!({
@ -140,10 +139,7 @@ pub async fn execute_semantic(
}))
}
pub async fn execute_hybrid(
storage: &Arc<Storage>,
args: Option<Value>,
) -> Result<Value, String> {
pub async fn execute_hybrid(storage: &Arc<Storage>, args: Option<Value>) -> Result<Value, String> {
let args: HybridSearchArgs = match args {
Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
None => return Err("Missing arguments".to_string()),
@ -153,7 +149,6 @@ pub async fn execute_hybrid(
return Err("Query cannot be empty".to_string());
}
let results = storage
.hybrid_search(
&args.query,


@ -191,15 +191,16 @@ pub async fn execute(
.hybrid_search_filtered(
&args.query,
keyword_first_limit,
1.0, // keyword_weight = 1.0 (keyword-only)
0.0, // semantic_weight = 0.0
1.0, // keyword_weight = 1.0 (keyword-only)
0.0, // semantic_weight = 0.0
args.include_types.as_deref(),
args.exclude_types.as_deref(),
)
.map_err(|e| e.to_string())?;
// Collect keyword-priority results (keyword_score >= threshold)
let mut keyword_priority_ids: std::collections::HashSet<String> = std::collections::HashSet::new();
let mut keyword_priority_ids: std::collections::HashSet<String> =
std::collections::HashSet::new();
let mut keyword_priority_results: Vec<vestige_core::SearchResult> = Vec::new();
for r in keyword_first_results {
if r.keyword_score.unwrap_or(0.0) >= keyword_priority_threshold
@ -214,7 +215,7 @@ pub async fn execute(
// STAGE 1: Hybrid search with Nx over-fetch for reranking pool
// ====================================================================
let overfetch_multiplier = match retrieval_mode {
"precise" => 1, // No overfetch — return exactly what's asked
"precise" => 1, // No overfetch — return exactly what's asked
"exhaustive" => 5, // Deep overfetch for maximum recall
_ => 3, // Balanced default
};
@ -251,7 +252,10 @@ pub async fn execute(
// Dedup: merge Stage 0 keyword-priority results into Stage 1 results
// ====================================================================
for kp in &keyword_priority_results {
if let Some(existing) = filtered_results.iter_mut().find(|r| r.node.id == kp.node.id) {
if let Some(existing) = filtered_results
.iter_mut()
.find(|r| r.node.id == kp.node.id)
{
// Preserve keyword_score from Stage 0 (keyword-only search is authoritative)
if kp.keyword_score.unwrap_or(0.0) > existing.keyword_score.unwrap_or(0.0) {
existing.keyword_score = kp.keyword_score;
@ -305,7 +309,10 @@ pub async fn execute(
let reranked_results: Vec<vestige_core::SearchResult> = if rerank_candidates.is_empty() {
Vec::new()
} else if let Ok(mut cog) = cognitive.try_lock() {
if let Ok(reranked) = cog.reranker.rerank(&args.query, rerank_candidates, Some(limit_usize)) {
if let Ok(reranked) =
cog.reranker
.rerank(&args.query, rerank_candidates, Some(limit_usize))
{
reranked.into_iter().map(|rr| rr.item).collect()
} else {
// Reranker failed — fall back to original order for non-bypass candidates
@ -343,8 +350,8 @@ pub async fn execute(
);
// Blend: 85% relevance + 15% temporal signal
let temporal_factor = recency * validity;
result.combined_score = result.combined_score * 0.85
+ (result.combined_score * temporal_factor as f32) * 0.15;
}
}
@ -368,9 +375,21 @@ pub async fn execute(
MemoryState::Unavailable
};
let mut adjusted = cog
.accessibility_calc
.calculate(&lifecycle, result.combined_score as f64);
// v2.0.5: Active forgetting penalty (Anderson 2025 SIF).
// Each prior suppress call compounds a retrieval-score penalty,
// saturating at 80%. Applied AFTER accessibility so the penalty
// stacks on top of any passive FSRS decay.
if result.node.suppression_count > 0 {
let sys =
vestige_core::neuroscience::active_forgetting::ActiveForgettingSystem::new();
let penalty = sys.retrieval_penalty(result.node.suppression_count);
adjusted *= 1.0 - penalty;
}
result.combined_score = adjusted as f32;
}
}
@ -381,14 +400,16 @@ pub async fn execute(
if let Some(ref topics) = args.context_topics
&& !topics.is_empty()
{
let retrieval_ctx =
EncodingContext::new().with_topical(TopicalContext::with_topics(topics.clone()));
if let Ok(cog) = cognitive.try_lock() {
for result in &mut filtered_results {
// Build encoding context from memory's tags
let encoding_ctx = EncodingContext::new()
.with_topical(TopicalContext::with_topics(result.node.tags.clone()));
let context_score = cog
.context_matcher
.match_contexts(&encoding_ctx, &retrieval_ctx);
// Blend: context match boosts relevance up to +30%
result.combined_score *= 1.0 + (context_score as f32 * 0.3);
}
@ -403,7 +424,9 @@ pub async fn execute(
} else {
EncodingContext::new()
};
let reinstatement = cog
.context_matcher
.reinstate_context(&first.node.id, &current_ctx);
Some(serde_json::json!({
"memoryId": reinstatement.memory_id,
"temporalHint": reinstatement.temporal_hint,
@ -438,7 +461,10 @@ pub async fn execute(
if let Some(result) = cog.competition_mgr.run_competition(&candidates, 0.7) {
// Apply suppression: losers get penalized
for suppressed_id in &result.suppressed_ids {
if let Some(r) = filtered_results
.iter_mut()
.find(|r| &r.node.id == suppressed_id)
{
r.combined_score *= 0.85; // 15% suppression penalty
suppressed_count += 1;
}
@ -503,7 +529,10 @@ pub async fn execute(
// ====================================================================
// Auto-strengthen on access (Testing Effect)
// ====================================================================
let ids: Vec<&str> = filtered_results
.iter()
.map(|r| r.node.id.as_str())
.collect();
let _ = storage.strengthen_batch_on_access(&ids);
// Drop storage lock before acquiring cognitive for side effects
@ -525,9 +554,9 @@ pub async fn execute(
cog.speculative_retriever.record_access(
&result.node.id,
None, // file_context
Some(args.query.as_str()), // query_context
None, // was_helpful (unknown yet)
);
// 7C. Mark labile for reconsolidation window (5 min)
@ -580,7 +609,11 @@ pub async fn execute(
}
// Check learning mode via attention signal
let learning_mode = cognitive
.try_lock()
.ok()
.map(|cog| cog.attention_signal.is_learning_mode())
.unwrap_or(false);
let mut response = serde_json::json!({
"query": args.query,
@ -593,7 +626,9 @@ pub async fn execute(
// Helpful hint when no results found
if formatted.is_empty() {
response["hint"] = serde_json::json!("No memories found. Use smart_ingest to add memories, or try a broader query.");
response["hint"] = serde_json::json!(
"No memories found. Use smart_ingest to add memories, or try a broader query."
);
}
// Include associations if any were found
@ -1038,10 +1073,12 @@ mod tests {
let schema_value = schema();
assert_eq!(schema_value["type"], "object");
assert!(schema_value["properties"]["query"].is_object());
assert!(schema_value["required"]
.as_array()
.unwrap()
.contains(&serde_json::json!("query")));
assert!(
schema_value["required"]
.as_array()
.unwrap()
.contains(&serde_json::json!("query"))
);
}
#[test]
@ -1172,8 +1209,14 @@ mod tests {
// Summary should have content AND timestamps (v2.1: dates always visible)
assert!(first["content"].is_string());
assert!(first["id"].is_string());
assert!(first["createdAt"].is_string(), "summary must include createdAt");
assert!(first["updatedAt"].is_string(), "summary must include updatedAt");
assert!(
first["createdAt"].is_string(),
"summary must include createdAt"
);
assert!(
first["updatedAt"].is_string(),
"summary must include updatedAt"
);
}
}
@ -1199,7 +1242,10 @@ mod tests {
for i in 0..10 {
ingest_test_content(
&storage,
&format!("Budget test content number {} with some extra text to increase size.", i),
&format!(
"Budget test content number {} with some extra text to increase size.",
i
),
)
.await;
}


@ -110,7 +110,9 @@ pub async fn execute(
let include_status = args.include_status.unwrap_or(true);
let include_intentions = args.include_intentions.unwrap_or(true);
let include_predictions = args.include_predictions.unwrap_or(true);
let queries = args
.queries
.unwrap_or_else(|| vec!["user preferences".to_string()]);
let mut context_parts: Vec<String> = Vec::new();
let mut expandable_ids: Vec<String> = Vec::new();
@ -275,34 +277,33 @@ pub async fn execute(
if include_predictions {
let cog = cognitive.lock().await;
let session_ctx =
vestige_core::neuroscience::predictive_retrieval::SessionContext {
started_at: Utc::now(),
current_focus: args
.context
.as_ref()
.and_then(|c| c.topics.as_ref())
.and_then(|t| t.first())
.cloned(),
active_files: args
.context
.as_ref()
.and_then(|c| c.file.as_ref())
.map(|f| vec![f.clone()])
.unwrap_or_default(),
accessed_memories: Vec::new(),
recent_queries: Vec::new(),
detected_intent: None,
project_context: args.context.as_ref().and_then(|c| c.codebase.as_ref()).map(
|name| vestige_core::neuroscience::predictive_retrieval::ProjectContext {
name: name.to_string(),
path: String::new(),
technologies: Vec::new(),
primary_language: None,
},
),
};
let predictions = cog
.predictive_memory
@ -335,41 +336,45 @@ pub async fn execute(
// 5. Codebase patterns/decisions (if codebase specified)
// ====================================================================
if let Some(ref ctx) = args.context
&& let Some(ref codebase) = ctx.codebase
{
let codebase_tag = format!("codebase:{}", codebase);
let mut cb_lines: Vec<String> = Vec::new();
// Get patterns
if let Ok(patterns) = storage.get_nodes_by_type_and_tag("pattern", Some(&codebase_tag), 3) {
for p in &patterns {
let line = format!("- [pattern] {}", first_sentence(&p.content));
let line_len = line.len() + 1;
if char_count + line_len <= budget_chars {
cb_lines.push(line);
char_count += line_len;
}
}
}
// Get decisions
if let Ok(decisions) =
storage.get_nodes_by_type_and_tag("decision", Some(&codebase_tag), 3)
{
for d in &decisions {
let line = format!("- [decision] {}", first_sentence(&d.content));
let line_len = line.len() + 1;
if char_count + line_len <= budget_chars {
cb_lines.push(line);
char_count += line_len;
}
}
}
if !cb_lines.is_empty() {
context_parts.push(format!(
"**Codebase ({}):**\n{}",
codebase,
cb_lines.join("\n")
));
}
}
// ====================================================================
// 6. Assemble final response
// ====================================================================
@ -405,9 +410,10 @@ fn check_intention_triggered(
match trigger.trigger_type.as_deref() {
Some("time") => {
if let Some(ref at) = trigger.at
&& let Ok(trigger_time) = DateTime::parse_from_rfc3339(at)
{
return trigger_time.with_timezone(&Utc) <= now;
}
if let Some(mins) = trigger.in_minutes {
let trigger_time = intention.created_at + Duration::minutes(mins);
return trigger_time <= now;
@ -420,22 +426,23 @@ fn check_intention_triggered(
&& current_cb
.to_lowercase()
.contains(&trigger_cb.to_lowercase())
{
return true;
}
// Check file pattern match
if let (Some(pattern), Some(file)) = (&trigger.file_pattern, &ctx.file)
&& file.contains(pattern.as_str())
{
return true;
}
// Check topic match
if let (Some(topic), Some(topics)) = (&trigger.topic, &ctx.topics)
&& topics
.iter()
.any(|t| t.to_lowercase().contains(&topic.to_lowercase()))
{
return true;
}
false
}
_ => false,
@ -537,7 +544,12 @@ mod tests {
#[tokio::test]
async fn test_with_queries() {
let (storage, _dir) = test_storage().await;
ingest_test_content(
&storage,
"Sam prefers Rust and TypeScript for all projects.",
vec![],
)
.await;
let args = serde_json::json!({
"queries": ["Sam preferences", "project context"]
@ -574,12 +586,16 @@ mod tests {
assert!(result.is_ok());
let value = result.unwrap();
let ctx = value["context"].as_str().unwrap();
assert!(value["context"].is_string());
// Context should be within budget (200 tokens * 4 = 800 chars + header overhead)
// The actual char count of context should be reasonable
let tokens_used = value["tokensUsed"].as_u64().unwrap();
// Allow some overhead for the header
assert!(
tokens_used <= 300,
"tokens_used {} should be near budget 200",
tokens_used
);
}
#[tokio::test]
@ -649,7 +665,8 @@ mod tests {
let (storage, _dir) = test_storage().await;
// Ingest a pattern with codebase tag
let input = IngestInput {
content: "Code pattern: Use Arc<Mutex<>> for shared state in async contexts.".to_string(),
content: "Code pattern: Use Arc<Mutex<>> for shared state in async contexts."
.to_string(),
node_type: "pattern".to_string(),
source: None,
sentiment_score: 0.0,
@ -681,7 +698,10 @@ mod tests {
#[test]
fn test_first_sentence_period() {
assert_eq!(first_sentence("Hello world. More text here."), "Hello world.");
assert_eq!(
first_sentence("Hello world. More text here."),
"Hello world."
);
}
#[test]


@ -22,7 +22,7 @@ use tokio::sync::Mutex;
use crate::cognitive::CognitiveEngine;
use vestige_core::{
ContentType, ImportanceContext, ImportanceEvent, ImportanceEventType, IngestInput, Storage,
};
/// Input schema for smart_ingest tool
@ -136,7 +136,9 @@ pub async fn execute(
}
// Single mode: content is required
let content = args.content.ok_or("Missing 'content' field. Provide 'content' for single mode or 'items' for batch mode.")?;
let content = args.content.ok_or(
"Missing 'content' field. Provide 'content' for single mode or 'items' for batch mode.",
)?;
// Validate content
if content.trim().is_empty() {
@ -156,7 +158,9 @@ pub async fn execute(
if let Ok(cog) = cognitive.try_lock() {
// 4A. Full 4-channel importance scoring
let context = ImportanceContext::current();
let importance = cog
.importance_signals
.compute_importance(&content, &context);
importance_composite = importance.composite;
// 4B. Intent detection → auto-tag
@ -201,7 +205,13 @@ pub async fn execute(
let has_embedding = node.has_embedding.unwrap_or(false);
// Post-ingest cognitive side effects
run_post_ingest(
cognitive,
&node_id,
&node_content,
&node_type,
importance_composite,
);
return Ok(serde_json::json!({
"success": true,
@ -225,7 +235,13 @@ pub async fn execute(
let has_embedding = result.node.has_embedding.unwrap_or(false);
// Post-ingest cognitive side effects
run_post_ingest(
cognitive,
&node_id,
&node_content,
&node_type,
importance_composite,
);
Ok(serde_json::json!({
"success": true,
@ -258,7 +274,13 @@ pub async fn execute(
let node_content = node.content.clone();
let node_type = node.node_type.clone();
run_post_ingest(
cognitive,
&node_id,
&node_content,
&node_type,
importance_composite,
);
Ok(serde_json::json!({
"success": true,
@ -331,7 +353,9 @@ async fn execute_batch(
if let Ok(cog) = cognitive.try_lock() {
let context = ImportanceContext::current();
let importance = cog
.importance_signals
.compute_importance(&item.content, &context);
importance_composite = importance.composite;
let intent_result = cog.intent_detector.detect_intent();
@ -373,7 +397,13 @@ async fn execute_batch(
let node_type = node.node_type.clone();
created += 1;
run_post_ingest(
cognitive,
&node_id,
&node_content,
&node_type,
importance_composite,
);
results.push(serde_json::json!({
"index": i,
@ -411,7 +441,13 @@ async fn execute_batch(
}
// Post-ingest cognitive side effects
run_post_ingest(
cognitive,
&node_id,
&node_content,
&node_type,
importance_composite,
);
results.push(serde_json::json!({
"index": i,
@ -443,7 +479,13 @@ async fn execute_batch(
let node_type = node.node_type.clone();
created += 1;
run_post_ingest(
cognitive,
&node_id,
&node_content,
&node_type,
importance_composite,
);
results.push(serde_json::json!({
"index": i,
@ -514,7 +556,8 @@ fn run_post_ingest(
);
// 4G. Cross-project pattern recording
cog.cross_project
.record_project_memory(node_id, "default", None);
}
}
@ -576,8 +619,13 @@ mod tests {
let value = result.unwrap();
assert_eq!(value["success"], true);
assert_eq!(value["decision"], "create");
assert!(value["reason"].as_str().unwrap().contains("Forced") ||
value["reason"].as_str().unwrap().contains("Embeddings not available"));
assert!(
value["reason"].as_str().unwrap().contains("Forced")
|| value["reason"]
.as_str()
.unwrap()
.contains("Embeddings not available")
);
}
#[test]
@ -729,7 +777,12 @@ mod tests {
#[tokio::test]
async fn test_batch_empty_items_fails() {
let (storage, _dir) = test_storage().await;
let result = execute(
&storage,
&test_cognitive(),
Some(serde_json::json!({ "items": [] })),
)
.await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("empty"));
}
@ -738,14 +791,16 @@ mod tests {
async fn test_batch_ingest() {
let (storage, _dir) = test_storage().await;
let result = execute(
&storage,
&test_cognitive(),
Some(serde_json::json!({
"items": [
{ "content": "First batch item", "tags": ["test"] },
{ "content": "Second batch item", "tags": ["test"] }
]
})),
)
.await;
assert!(result.is_ok());
let value = result.unwrap();
assert_eq!(value["mode"], "batch");
@ -756,7 +811,8 @@ mod tests {
async fn test_batch_skips_empty_content() {
let (storage, _dir) = test_storage().await;
let result = execute(
&storage,
&test_cognitive(),
Some(serde_json::json!({
"items": [
{ "content": "Valid item" },
@ -764,7 +820,8 @@ mod tests {
{ "content": "Another valid item" }
]
})),
)
.await;
assert!(result.is_ok());
let value = result.unwrap();
assert_eq!(value["summary"]["skipped"], 1);
@ -784,7 +841,12 @@ mod tests {
let items: Vec<serde_json::Value> = (0..21)
.map(|i| serde_json::json!({ "content": format!("Item {}", i) }))
.collect();
let result = execute(
&storage,
&test_cognitive(),
Some(serde_json::json!({ "items": items })),
)
.await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("Maximum 20 items"));
}
@ -795,7 +857,12 @@ mod tests {
let items: Vec<serde_json::Value> = (0..20)
.map(|i| serde_json::json!({ "content": format!("Item {}", i) }))
.collect();
let result = execute(
&storage,
&test_cognitive(),
Some(serde_json::json!({ "items": items })),
)
.await;
assert!(result.is_ok());
let value = result.unwrap();
assert_eq!(value["summary"]["total"], 20);
@ -805,14 +872,16 @@ mod tests {
async fn test_batch_skips_whitespace_only_content() {
let (storage, _dir) = test_storage().await;
let result = execute(
&storage,
&test_cognitive(),
Some(serde_json::json!({
"items": [
{ "content": " \t\n " },
{ "content": "Valid content" }
]
})),
)
.await;
assert!(result.is_ok());
let value = result.unwrap();
assert_eq!(value["summary"]["skipped"], 1);
@ -823,11 +892,13 @@ mod tests {
async fn test_batch_single_item_succeeds() {
let (storage, _dir) = test_storage().await;
let result = execute(
&storage,
&test_cognitive(),
Some(serde_json::json!({
"items": [{ "content": "Single item" }]
})),
)
.await;
assert!(result.is_ok());
let value = result.unwrap();
assert_eq!(value["summary"]["total"], 1);
@ -838,7 +909,8 @@ mod tests {
async fn test_batch_items_with_all_fields() {
let (storage, _dir) = test_storage().await;
let result = execute(
&storage,
&test_cognitive(),
Some(serde_json::json!({
"items": [{
"content": "Full fields item",
@ -847,7 +919,8 @@ mod tests {
"source": "test-suite"
}]
})),
)
.await;
assert!(result.is_ok());
let value = result.unwrap();
assert_eq!(value["summary"]["created"], 1);
@ -857,7 +930,8 @@ mod tests {
async fn test_batch_results_array_matches_items() {
let (storage, _dir) = test_storage().await;
let result = execute(
&storage,
&test_cognitive(),
Some(serde_json::json!({
"items": [
{ "content": "First" },
@ -865,7 +939,8 @@ mod tests {
{ "content": "Third" }
]
})),
)
.await;
let value = result.unwrap();
let results = value["results"].as_array().unwrap();
assert_eq!(results.len(), 3);
@ -879,14 +954,16 @@ mod tests {
async fn test_batch_success_true_when_only_skipped() {
let (storage, _dir) = test_storage().await;
let result = execute(
&storage,
&test_cognitive(),
Some(serde_json::json!({
"items": [
{ "content": "" },
{ "content": " " }
]
})),
)
.await;
let value = result.unwrap();
assert_eq!(value["success"], true); // skipped ≠ errors
assert_eq!(value["summary"]["errors"], 0);
@ -897,11 +974,13 @@ mod tests {
async fn test_batch_has_importance_scores() {
let (storage, _dir) = test_storage().await;
let result = execute(
&storage,
&test_cognitive(),
Some(serde_json::json!({
"items": [{ "content": "Important batch memory content" }]
})),
)
.await;
let value = result.unwrap();
let results = value["results"].as_array().unwrap();
assert!(results[0]["importanceScore"].is_number());
@ -912,7 +991,8 @@ mod tests {
let (storage, _dir) = test_storage().await;
// Three items with very similar content + global forceCreate
let result = execute(
&storage,
&test_cognitive(),
Some(serde_json::json!({
"forceCreate": true,
"items": [
@ -921,7 +1001,8 @@ mod tests {
{ "content": "Physics question about quantum mechanics and wave behavior" }
]
})),
)
.await;
assert!(result.is_ok());
let value = result.unwrap();
assert_eq!(value["mode"], "batch");
@ -941,7 +1022,8 @@ mod tests {
let (storage, _dir) = test_storage().await;
// Mix of forced and non-forced items
let result = execute(
&storage,
&test_cognitive(),
Some(serde_json::json!({
"items": [
{ "content": "Forced item one", "forceCreate": true },
@ -949,7 +1031,8 @@ mod tests {
{ "content": "Forced item three", "forceCreate": true }
]
})),
)
.await;
assert!(result.is_ok());
let value = result.unwrap();
let results = value["results"].as_array().unwrap();


@ -57,15 +57,23 @@ pub async fn execute_health(storage: &Arc<Storage>) -> Result<Value, String> {
let mut warnings = Vec::new();
if stats.average_retention < 0.5 && stats.total_nodes > 0 {
warnings.push("Low average retention - consider running consolidation or reviewing memories".to_string());
warnings.push(
"Low average retention - consider running consolidation or reviewing memories"
.to_string(),
);
}
if stats.nodes_due_for_review > 10 {
warnings.push(format!("{} memories are due for review", stats.nodes_due_for_review));
warnings.push(format!(
"{} memories are due for review",
stats.nodes_due_for_review
));
}
if stats.total_nodes > 0 && stats.nodes_with_embeddings == 0 {
warnings.push("No embeddings generated - semantic search unavailable. Run consolidation.".to_string());
warnings.push(
"No embeddings generated - semantic search unavailable. Run consolidation.".to_string(),
);
}
let embedding_coverage = if stats.total_nodes > 0 {
@ -75,7 +83,10 @@ pub async fn execute_health(storage: &Arc<Storage>) -> Result<Value, String> {
};
if embedding_coverage < 50.0 && stats.total_nodes > 10 {
warnings.push(format!("Only {:.1}% of memories have embeddings", embedding_coverage));
warnings.push(format!(
"Only {:.1}% of memories have embeddings",
embedding_coverage
));
}
Ok(serde_json::json!({
@ -90,10 +101,7 @@ pub async fn execute_health(storage: &Arc<Storage>) -> Result<Value, String> {
}))
}
fn get_recommendations(stats: &MemoryStats, status: &str) -> Vec<String> {
let mut recommendations = Vec::new();
if status == "critical" {
@ -105,11 +113,15 @@ fn get_recommendations(
}
if stats.nodes_with_embeddings < stats.total_nodes {
recommendations.push(
"Run 'run_consolidation' to generate embeddings for better semantic search."
.to_string(),
);
}
if stats.total_nodes > 100 && stats.average_retention < 0.7 {
recommendations.push("Consider running periodic consolidation to maintain memory health.".to_string());
recommendations
.push("Consider running periodic consolidation to maintain memory health.".to_string());
}
if recommendations.is_empty() {


@ -0,0 +1,313 @@
//! `suppress` MCP Tool (v2.0.5) — Top-Down Active Forgetting
//!
//! Actively suppress a memory via top-down inhibitory control. Distinct from
//! `memory.delete` (which removes the row) and `memory.demote` (which is a
//! one-shot thumb-down). Each call compounds: suppression_count increments,
//! FSRS state is dealt a strong blow, and a background Rac1 cascade worker
//! (in the existing consolidation loop) will fade co-activated neighbors.
//!
//! Reversible within a 24-hour labile window via `reverse: true`.
//!
//! References:
//! - Anderson et al. (2025). Brain mechanisms underlying the inhibitory
//! control of thought. Nat Rev Neurosci. DOI 10.1038/s41583-025-00929-y
//! - Cervantes-Sandoval & Davis (2020). Rac1 Impairs Forgetting-Induced
//! Cellular Plasticity. Front Cell Neurosci. PMC7477079
use serde::{Deserialize, Serialize};
use serde_json::{Value, json};
use std::sync::Arc;
use vestige_core::Storage;
use vestige_core::neuroscience::active_forgetting::{ActiveForgettingSystem, DEFAULT_LABILE_HOURS};
/// Input schema for the `suppress` tool.
pub fn schema() -> Value {
json!({
"type": "object",
"description": "Actively suppress a memory via top-down inhibitory control (Anderson 2025 SIF + Davis Rac1). Distinct from delete: the memory persists but is inhibited from retrieval and actively decays. Each call compounds suppression strength. A background Rac1 worker cascades accelerated decay to co-activated neighbors over the next 72 hours. Reversible within 24 hours via reverse=true.",
"properties": {
"id": {
"type": "string",
"description": "Memory UUID to suppress (or reverse-suppress)"
},
"reason": {
"type": "string",
"description": "Optional free-form note explaining why this memory is being suppressed. Logged for audit."
},
"reverse": {
"type": "boolean",
"default": false,
"description": "If true, reverse a previous suppression. Only works within the 24-hour labile window."
}
},
"required": ["id"]
})
}
#[derive(Debug, Deserialize, Serialize)]
#[serde(rename_all = "camelCase")]
struct SuppressArgs {
id: String,
#[serde(default)]
reason: Option<String>,
#[serde(default)]
reverse: bool,
}
pub async fn execute(storage: &Arc<Storage>, args: Option<Value>) -> Result<Value, String> {
let args: SuppressArgs = match args {
Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
None => return Err("Missing arguments".to_string()),
};
if args.id.trim().is_empty() {
return Err("'id' must not be empty".to_string());
}
// Basic UUID sanity check: reject malformed IDs before touching storage.
if uuid::Uuid::parse_str(&args.id).is_err() {
return Err(format!("Invalid memory ID format: {}", args.id));
}
let sys = ActiveForgettingSystem::new();
if args.reverse {
// Reverse path — only allowed within labile window.
match storage.reverse_suppression(&args.id, sys.labile_hours) {
Ok(node) => {
let still_suppressed = node.suppression_count > 0;
Ok(json!({
"success": true,
"action": "reverse",
"id": args.id,
"suppressionCount": node.suppression_count,
"stillSuppressed": still_suppressed,
"retentionStrength": node.retention_strength,
"retrievalStrength": node.retrieval_strength,
"stability": node.stability,
"message": if still_suppressed {
format!(
"Reversal applied. {} suppression(s) remain on this memory.",
node.suppression_count
)
} else {
"Suppression fully reversed. Memory is no longer inhibited.".to_string()
},
}))
}
Err(e) => Err(format!("Reverse failed: {}", e)),
}
} else {
// Forward path — suppress + log reason + tell the user what will happen.
let before_count = storage
.get_node(&args.id)
.map_err(|e| format!("Failed to load memory: {}", e))?
.map(|n| n.suppression_count)
.unwrap_or(0);
let node = storage
.suppress_memory(&args.id)
.map_err(|e| format!("Suppress failed: {}", e))?;
// Count how many neighbors will be cascaded over the coming 72h.
// We don't run the cascade synchronously — it happens in the
// background consolidation loop via `run_rac1_cascade_sweep`. But we
// can give the user an estimate.
let edges = storage
.get_connections_for_memory(&args.id)
.unwrap_or_default();
let estimated_cascade = edges.len().min(100);
let reversible_until = node
.suppressed_at
.map(|t| sys.reversible_until(t))
.unwrap_or_else(chrono::Utc::now);
let retrieval_penalty = sys.retrieval_penalty(node.suppression_count);
tracing::info!(
id = %args.id,
count = node.suppression_count,
reason = args.reason.as_deref().unwrap_or(""),
"Memory suppressed"
);
Ok(json!({
"success": true,
"action": "suppress",
"id": args.id,
"suppressionCount": node.suppression_count,
"priorCount": before_count,
"retrievalPenalty": retrieval_penalty,
"retentionStrength": node.retention_strength,
"retrievalStrength": node.retrieval_strength,
"stability": node.stability,
"estimatedCascadeNeighbors": estimated_cascade,
"reversibleUntil": reversible_until.to_rfc3339(),
"labileWindowHours": DEFAULT_LABILE_HOURS,
"reason": args.reason,
"message": format!(
"Actively forgetting. Suppression #{} applied. ~{} co-activated neighbors will fade over the next 72h via Rac1 cascade. Reversible for {}h.",
node.suppression_count, estimated_cascade, DEFAULT_LABILE_HOURS
),
"citation": "Anderson et al. 2025, Nat Rev Neurosci, DOI: 10.1038/s41583-025-00929-y"
}))
}
}
#[cfg(test)]
mod tests {
use super::*;
use tempfile::TempDir;
    use vestige_core::IngestInput;

    fn test_storage() -> (Arc<Storage>, TempDir) {
        let dir = TempDir::new().unwrap();
        let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
        (Arc::new(storage), dir)
    }

    fn ingest(storage: &Storage, content: &str) -> String {
        storage
            .ingest(IngestInput {
                content: content.to_string(),
                node_type: "fact".to_string(),
                source: None,
                sentiment_score: 0.0,
                sentiment_magnitude: 0.0,
                tags: vec!["test".to_string()],
                valid_from: None,
                valid_until: None,
            })
            .unwrap()
            .id
    }

    #[test]
    fn test_schema_is_valid() {
        let s = schema();
        assert_eq!(s["type"], "object");
        assert!(s["properties"]["id"].is_object());
        assert!(s["properties"]["reverse"].is_object());
        assert_eq!(s["required"][0], "id");
    }

    #[tokio::test]
    async fn test_suppress_missing_args() {
        let (storage, _dir) = test_storage();
        let result = execute(&storage, None).await;
        assert!(result.is_err());
        assert!(result.unwrap_err().contains("Missing arguments"));
    }

    #[tokio::test]
    async fn test_suppress_invalid_uuid() {
        let (storage, _dir) = test_storage();
        let args = json!({"id": "not-a-uuid"});
        let result = execute(&storage, Some(args)).await;
        assert!(result.is_err());
        assert!(result.unwrap_err().contains("Invalid memory ID"));
    }

    #[tokio::test]
    async fn test_suppress_increments_count() {
        let (storage, _dir) = test_storage();
        let id = ingest(&storage, "Jake is my roommate");

        // First call
        let r1 = execute(&storage, Some(json!({"id": id.clone()})))
            .await
            .unwrap();
        assert_eq!(r1["suppressionCount"], 1);
        assert_eq!(r1["priorCount"], 0);

        // Second call — compounds
        let r2 = execute(&storage, Some(json!({"id": id.clone()})))
            .await
            .unwrap();
        assert_eq!(r2["suppressionCount"], 2);
        assert_eq!(r2["priorCount"], 1);
    }

    #[tokio::test]
    async fn test_suppress_applies_fsrs_penalty() {
        let (storage, _dir) = test_storage();
        let id = ingest(&storage, "Jake");
        let before = storage.get_node(&id).unwrap().unwrap();

        let result = execute(&storage, Some(json!({"id": id.clone()})))
            .await
            .unwrap();

        // Stability should be heavily reduced
        let after_stability = result["stability"].as_f64().unwrap();
        assert!(after_stability < before.stability);

        // Retention should be reduced
        let after_retention = result["retentionStrength"].as_f64().unwrap();
        assert!(after_retention < before.retention_strength);
    }

    #[tokio::test]
    async fn test_suppress_is_not_delete() {
        let (storage, _dir) = test_storage();
        let id = ingest(&storage, "Jake");
        execute(&storage, Some(json!({"id": id.clone()})))
            .await
            .unwrap();

        // Memory must still be retrievable via get_node
        let node = storage.get_node(&id).unwrap();
        assert!(node.is_some(), "Suppressed memory must still exist");
        assert_eq!(node.unwrap().suppression_count, 1);
    }

    #[tokio::test]
    async fn test_reverse_within_window_decrements() {
        let (storage, _dir) = test_storage();
        let id = ingest(&storage, "Jake");
        execute(&storage, Some(json!({"id": id.clone()})))
            .await
            .unwrap();
        execute(&storage, Some(json!({"id": id.clone()})))
            .await
            .unwrap();

        // Now reverse — count should drop from 2 to 1
        let r = execute(&storage, Some(json!({"id": id.clone(), "reverse": true})))
            .await
            .unwrap();
        assert_eq!(r["suppressionCount"], 1);
        assert_eq!(r["stillSuppressed"], true);

        // Reverse again — should go to 0
        let r = execute(&storage, Some(json!({"id": id.clone(), "reverse": true})))
            .await
            .unwrap();
        assert_eq!(r["suppressionCount"], 0);
        assert_eq!(r["stillSuppressed"], false);
    }

    #[tokio::test]
    async fn test_reverse_without_prior_suppression_fails() {
        let (storage, _dir) = test_storage();
        let id = ingest(&storage, "Fresh memory");
        let result = execute(&storage, Some(json!({"id": id.clone(), "reverse": true}))).await;
        assert!(result.is_err());
        assert!(result.unwrap_err().contains("no active suppression"));
    }

    #[tokio::test]
    async fn test_suppress_records_timestamp() {
        let (storage, _dir) = test_storage();
        let id = ingest(&storage, "Jake");
        execute(&storage, Some(json!({"id": id.clone()})))
            .await
            .unwrap();
        let node = storage.get_node(&id).unwrap().unwrap();
        assert!(node.suppressed_at.is_some(), "suppressed_at must be set");
    }
}


@@ -7,8 +7,8 @@ use serde_json::Value;
 use std::sync::Arc;
 use vestige_core::{
-    CaptureWindow, ImportanceEvent, ImportanceEventType,
-    SynapticTaggingConfig, SynapticTaggingSystem, Storage,
+    CaptureWindow, ImportanceEvent, ImportanceEventType, Storage, SynapticTaggingConfig,
+    SynapticTaggingSystem,
 };

 /// Input schema for trigger_importance tool
@@ -69,27 +69,22 @@ pub fn stats_schema() -> Value {
 }

 /// Trigger an importance event to retroactively strengthen recent memories
-pub async fn execute_trigger(
-    storage: &Arc<Storage>,
-    args: Option<Value>,
-) -> Result<Value, String> {
+pub async fn execute_trigger(storage: &Arc<Storage>, args: Option<Value>) -> Result<Value, String> {
     let args = args.ok_or("Missing arguments")?;
     let event_type_str = args["event_type"]
         .as_str()
         .ok_or("event_type is required")?;
-    let memory_id = args["memory_id"]
-        .as_str()
-        .ok_or("memory_id is required")?;
+    let memory_id = args["memory_id"].as_str().ok_or("memory_id is required")?;
     let description = args["description"].as_str();
     let hours_back = args["hours_back"].as_f64().unwrap_or(9.0);
     let hours_forward = args["hours_forward"].as_f64().unwrap_or(2.0);

     // Verify the trigger memory exists
-    let trigger_memory = storage.get_node(memory_id)
+    let trigger_memory = storage
+        .get_node(memory_id)
         .map_err(|e| format!("Error: {}", e))?
         .ok_or("Memory not found")?;
@@ -121,8 +116,7 @@ pub async fn execute_trigger(
     let mut stc = SynapticTaggingSystem::with_config(config);

     // Get recent memories to tag
-    let recent = storage.get_all_nodes(100, 0)
-        .map_err(|e| e.to_string())?;
+    let recent = storage.get_all_nodes(100, 0).map_err(|e| e.to_string())?;

     // Tag all recent memories
     for mem in &recent {
@@ -155,32 +149,30 @@ pub async fn execute_trigger(
 }

 /// Find memories with active synaptic tags
-pub async fn execute_find(
-    storage: &Arc<Storage>,
-    args: Option<Value>,
-) -> Result<Value, String> {
+pub async fn execute_find(storage: &Arc<Storage>, args: Option<Value>) -> Result<Value, String> {
     let args = args.unwrap_or(serde_json::json!({}));
     let min_strength = args["min_strength"].as_f64().unwrap_or(0.3);
     let limit = args["limit"].as_i64().unwrap_or(20) as usize;

     // Get memories with high retention (proxy for "tagged")
-    let memories = storage.get_all_nodes(200, 0)
-        .map_err(|e| e.to_string())?;
+    let memories = storage.get_all_nodes(200, 0).map_err(|e| e.to_string())?;

     // Filter by retention strength (tagged memories have higher retention)
-    let tagged: Vec<Value> = memories.into_iter()
+    let tagged: Vec<Value> = memories
+        .into_iter()
         .filter(|m| m.retention_strength >= min_strength)
         .take(limit)
-        .map(|m| serde_json::json!({
-            "id": m.id,
-            "content": m.content,
-            "retentionStrength": m.retention_strength,
-            "storageStrength": m.storage_strength,
-            "lastAccessed": m.last_accessed.to_rfc3339(),
-            "tags": m.tags
-        }))
+        .map(|m| {
+            serde_json::json!({
+                "id": m.id,
+                "content": m.content,
+                "retentionStrength": m.retention_strength,
+                "storageStrength": m.storage_strength,
+                "lastAccessed": m.last_accessed.to_rfc3339(),
+                "tags": m.tags
+            })
+        })
         .collect();

     Ok(serde_json::json!({
@@ -192,17 +184,22 @@ pub async fn execute_find(
 }

 /// Get synaptic tagging statistics
-pub async fn execute_stats(
-    storage: &Arc<Storage>,
-) -> Result<Value, String> {
-    let memories = storage.get_all_nodes(500, 0)
-        .map_err(|e| e.to_string())?;
+pub async fn execute_stats(storage: &Arc<Storage>) -> Result<Value, String> {
+    let memories = storage.get_all_nodes(500, 0).map_err(|e| e.to_string())?;
     let total = memories.len();
-    let high_retention = memories.iter().filter(|m| m.retention_strength >= 0.7).count();
-    let medium_retention = memories.iter().filter(|m| m.retention_strength >= 0.4 && m.retention_strength < 0.7).count();
-    let low_retention = memories.iter().filter(|m| m.retention_strength < 0.4).count();
+    let high_retention = memories
+        .iter()
+        .filter(|m| m.retention_strength >= 0.7)
+        .count();
+    let medium_retention = memories
+        .iter()
+        .filter(|m| m.retention_strength >= 0.4 && m.retention_strength < 0.7)
+        .count();
+    let low_retention = memories
+        .iter()
+        .filter(|m| m.retention_strength < 0.4)
+        .count();
     let avg_retention = if total > 0 {
         memories.iter().map(|m| m.retention_strength).sum::<f64>() / total as f64


@@ -9,7 +9,6 @@ use serde_json::Value;
 use std::collections::BTreeMap;
 use std::sync::Arc;
 use vestige_core::Storage;
-use super::search_unified::format_node;
@@ -88,10 +87,7 @@ fn parse_datetime(s: &str) -> Result<DateTime<Utc>, String> {
 }

 /// Execute memory_timeline tool
-pub async fn execute(
-    storage: &Arc<Storage>,
-    args: Option<Value>,
-) -> Result<Value, String> {
+pub async fn execute(storage: &Arc<Storage>, args: Option<Value>) -> Result<Value, String> {
     let args: TimelineArgs = match args {
         Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
         None => TimelineArgs {
@@ -130,7 +126,6 @@ pub async fn execute(
     let limit = args.limit.unwrap_or(50).clamp(1, 200);
-
     // Query memories in time range
     let mut results = storage
         .query_time_range(start, end, limit)
@@ -195,17 +190,18 @@ mod tests {
     }

     async fn ingest_test_memory(storage: &Arc<Storage>, content: &str) {
-        storage.ingest(vestige_core::IngestInput {
-            content: content.to_string(),
-            node_type: "fact".to_string(),
-            source: None,
-            sentiment_score: 0.0,
-            sentiment_magnitude: 0.0,
-            tags: vec!["timeline-test".to_string()],
-            valid_from: None,
-            valid_until: None,
-        })
-        .unwrap();
+        storage
+            .ingest(vestige_core::IngestInput {
+                content: content.to_string(),
+                node_type: "fact".to_string(),
+                source: None,
+                sentiment_score: 0.0,
+                sentiment_magnitude: 0.0,
+                tags: vec!["timeline-test".to_string()],
+                valid_from: None,
+                valid_until: None,
+            })
+            .unwrap();
     }

     #[test]