feat: Vestige v1.9.1 AUTONOMIC — self-regulating memory with graph visualization

Retention Target System: auto-GC low-retention memories during consolidation
(VESTIGE_RETENTION_TARGET env var, default 0.8). Auto-Promote: memories
accessed 3+ times in 24h get frequency-dependent potentiation. Waking SWR
Tagging: promoted memories get preferential 70/30 dream replay. Improved
Consolidation Scheduler: triggers on 6h staleness or 2h active use.
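The retention-target pass can be sketched roughly as follows. This is an illustration only, assuming a `(id, retention)` pair shape; `retention_target` and `gc_candidates` are hypothetical names, not the actual Vestige internals — only the `VESTIGE_RETENTION_TARGET` env var and the 0.8 default come from this commit:

```rust
// Illustrative sketch of the retention-target GC described above.
// VESTIGE_RETENTION_TARGET and the 0.8 default come from the commit
// message; function names and data shapes here are hypothetical.
fn retention_target(default: f64) -> f64 {
    std::env::var("VESTIGE_RETENTION_TARGET")
        .ok()
        .and_then(|v| v.parse::<f64>().ok())
        .filter(|t| (0.0..=1.0).contains(t))
        .unwrap_or(default)
}

/// Collect IDs whose predicted retention fell below the target;
/// a consolidation pass would then GC these low-retention memories.
fn gc_candidates(retentions: &[(u64, f64)], target: f64) -> Vec<u64> {
    retentions
        .iter()
        .filter(|(_, r)| *r < target)
        .map(|(id, _)| *id)
        .collect()
}

fn main() {
    let target = retention_target(0.8);
    let low = gc_candidates(&[(1, 0.9), (2, 0.5), (3, 0.79)], target);
    println!("GC candidates: {:?}", low);
}
```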

New tools: memory_health (retention dashboard with distribution buckets,
trend tracking, recommendations) and memory_graph (subgraph export with
Fruchterman-Reingold force-directed layout, up to 200 nodes).
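For reference, the Fruchterman-Reingold layout used by memory_graph can be illustrated with a bare-bones force-directed loop. A sketch under stated assumptions: the actual Vestige parameters, cooling schedule, and node cap handling are not shown in this commit, and this standalone function shares no code with it:

```rust
// Minimal Fruchterman-Reingold sketch: repulsion k^2/d between all
// node pairs, attraction d^2/k along edges, displacement capped by a
// cooling temperature. Deterministic circle seeding for simplicity.
fn fruchterman_reingold(
    n: usize,
    edges: &[(usize, usize)],
    iterations: usize,
) -> Vec<(f64, f64)> {
    let k = (1.0 / n as f64).sqrt(); // ideal edge length for unit area
    let mut pos: Vec<(f64, f64)> = (0..n)
        .map(|i| {
            let a = i as f64 / n as f64 * std::f64::consts::TAU;
            (a.cos(), a.sin())
        })
        .collect();
    let mut temp = 0.1;
    for _ in 0..iterations {
        let mut disp = vec![(0.0f64, 0.0f64); n];
        // Repulsive force between every pair: k^2 / d
        for i in 0..n {
            for j in (i + 1)..n {
                let (dx, dy) = (pos[i].0 - pos[j].0, pos[i].1 - pos[j].1);
                let d = (dx * dx + dy * dy).sqrt().max(1e-9);
                let f = k * k / d;
                disp[i].0 += dx / d * f;
                disp[i].1 += dy / d * f;
                disp[j].0 -= dx / d * f;
                disp[j].1 -= dy / d * f;
            }
        }
        // Attractive force along edges: d^2 / k
        for &(u, v) in edges {
            let (dx, dy) = (pos[u].0 - pos[v].0, pos[u].1 - pos[v].1);
            let d = (dx * dx + dy * dy).sqrt().max(1e-9);
            let f = d * d / k;
            disp[u].0 -= dx / d * f;
            disp[u].1 -= dy / d * f;
            disp[v].0 += dx / d * f;
            disp[v].1 += dy / d * f;
        }
        // Move each node, capping displacement by the temperature, then cool.
        for i in 0..n {
            let (dx, dy) = disp[i];
            let d = (dx * dx + dy * dy).sqrt().max(1e-9);
            pos[i].0 += dx / d * d.min(temp);
            pos[i].1 += dy / d * d.min(temp);
        }
        temp *= 0.95;
    }
    pos
}

fn main() {
    let pos = fruchterman_reingold(5, &[(0, 1), (1, 2), (2, 3)], 50);
    println!("{:?}", pos);
}
```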

Dream connections now persist to database via save_connection(), enabling
memory_graph traversal. Schema Migration V8 adds waking_tag, utility_score,
times_retrieved/useful columns and retention_snapshots table. 21 MCP tools.

v1.9.1 fixes: ConnectionRecord export, UTF-8 safe truncation, link_type
normalization, utility_score clamping, only-new-connections persistence,
70/30 split capacity fill, nonexistent center_id error handling.
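The utility_score clamping fix, for instance, amounts to keeping the MemRL-style ratio in range. A hypothetical helper (the real logic lives against the V8 `times_useful`/`times_retrieved` columns and may differ):

```rust
// Hypothetical sketch of utility_score clamping: the ratio
// times_useful / times_retrieved, kept in [0.0, 1.0] and defined
// as 0.0 before the first retrieval to avoid division by zero.
fn utility_score(times_useful: u32, times_retrieved: u32) -> f64 {
    if times_retrieved == 0 {
        0.0
    } else {
        (times_useful as f64 / times_retrieved as f64).clamp(0.0, 1.0)
    }
}

fn main() {
    println!("{}", utility_score(3, 4));
}
```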

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
This commit is contained in:
Sam Valladares 2026-02-21 02:02:06 -06:00
parent c29023dd80
commit 5b90a73055
62 changed files with 2922 additions and 931 deletions


@@ -5,6 +5,30 @@ All notable changes to Vestige will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [1.8.0] - 2026-02-21
+
+### Added
+- **`session_context` tool** — one-call session initialization replacing 5 separate calls (search × 2, intention check, system_status, predict). Token-budgeted responses (~15K tokens → ~500-1000 tokens). Returns assembled markdown context, `automationTriggers` (needsDream/needsBackup/needsGc), and `expandable` memory IDs for on-demand retrieval.
+- **`token_budget` parameter on `search`** — limits response size (100-10000 tokens). Results exceeding budget moved to `expandable` array with `tokensUsed`/`tokenBudget` tracking.
+- **Reader/writer connection split** — `Storage` struct uses `Mutex<Connection>` for separate reader/writer SQLite handles with WAL mode. All methods take `&self` (interior mutability). `Arc<Mutex<Storage>>` → `Arc<Storage>` across ~30 files.
+- **int8 vector quantization** — `ScalarKind::F16` → `I8` (2x memory savings, <1% recall loss)
+- **Migration v7** — FTS5 porter tokenizer (15-30% keyword recall) + page_size 8192 (10-30% faster large-row reads)
+- 22 new tests for session_context and token_budget (335 → 357 mcp tests, 651 total)
+
+### Changed
+- Tool count: 18 → 19
+- `EmbeddingService::init()` changed from `&mut self` to `&self` (dead `model_loaded` field removed)
+- CLAUDE.md updated: session start uses `session_context`, 19 tools documented, development section reflects storage architecture
+
+### Performance
+- Session init: ~15K tokens → ~500-1000 tokens (single tool call)
+- Vector storage: 2x reduction (F16 → I8)
+- Keyword search: 15-30% better recall (FTS5 porter stemming)
+- Large-row reads: 10-30% faster (page_size 8192)
+- Concurrent reads: non-blocking (reader/writer WAL split)
+
+---
+
 ## [1.7.0] - 2026-02-20
 
 ### Changed


@@ -1,4 +1,4 @@
-# Vestige v1.7.0 — Cognitive Memory System
+# Vestige v1.8.0 — Cognitive Memory System
 
 Vestige is your long-term memory. It implements real neuroscience: FSRS-6 spaced repetition, synaptic tagging, prediction error gating, hippocampal indexing, spreading activation, and 28 stateful cognitive modules. **Use it automatically.**
@@ -9,23 +9,30 @@ Vestige is your long-term memory. It implements real neuroscience: FSRS-6 spaced
 Every conversation, before responding to the user:
 
 ```
-1. search("user preferences instructions") → recall who the user is
-2. search("[current project] context") → recall project patterns/decisions
-3. intention → check (with current context) → check for triggered reminders
-4. system_status → get system health + stats
-5. predict → predict needed memories → proactive retrieval for context
-6. Check automationTriggers from system_status:
-   - lastDreamTimestamp null OR >24h ago OR savesSinceLastDream > 50 → call dream
-   - lastBackupTimestamp null OR >7 days ago → call backup
+1. session_context({               → ONE CALL replaces steps 1-5
+     queries: ["user preferences", "[project] context"],
+     context: { codebase: "[project]", topics: ["[current topics]"] },
+     token_budget: 1000
+   })
+2. Check automationTriggers from response:
+   - needsDream == true → call dream
+   - needsBackup == true → call backup
+   - needsGc == true → call gc(dry_run: true)
    - totalMemories > 700 → call find_duplicates
-   - status == "degraded" or "critical" → call gc(dry_run: true)
 ```
 
 Say "Remembering..." then retrieve context before answering.
 
+> **Fallback:** If `session_context` is unavailable, use the 5-call sequence: `search` × 2 → `intention` check → `system_status` → `predict`.
+
 ---
 
-## The 18 Tools
+## The 19 Tools
+
+### Context Packets (1 tool) — v1.8.0
+
+| Tool | When to Use |
+|------|-------------|
+| `session_context` | **One-call session initialization.** Replaces 5 separate calls (search × 2, intention check, system_status, predict) with a single token-budgeted response. Returns markdown context + `automationTriggers` (needsDream/needsBackup/needsGc) + `expandable` IDs for on-demand full retrieval. Params: `queries` (string[]), `token_budget` (100-10000, default 1000), `context` ({codebase, topics, file}), `include_status/include_intentions/include_predictions` (bool). |
 
 ### Core Memory (1 tool)
 
 | Tool | When to Use |
@@ -35,7 +42,7 @@ Say "Remembering..." then retrieve context before answering.
 ### Unified Tools (4 tools)
 
 | Tool | Actions | When to Use |
 |------|---------|-------------|
-| `search` | query + filters | **Every time you need to recall anything.** Hybrid search (BM25 + semantic + convex combination fusion). 7-stage pipeline: overfetch → rerank → temporal boost → accessibility filter → context match → competition → spreading activation. Searching strengthens memory (Testing Effect). |
+| `search` | query + filters | **Every time you need to recall anything.** Hybrid search (BM25 + semantic + convex combination fusion). 7-stage pipeline: overfetch → rerank → temporal boost → accessibility filter → context match → competition → spreading activation. Searching strengthens memory (Testing Effect). **v1.8.0:** optional `token_budget` param (100-10000) limits response size; results exceeding budget moved to `expandable` array. |
 | `memory` | get, delete, state, promote, demote | Retrieve a full memory by ID, delete a memory, check its cognitive state (Active/Dormant/Silent/Unavailable), promote (thumbs up — increases retrieval strength), or demote (thumbs down — decreases retrieval strength, does NOT delete). |
 | `codebase` | remember_pattern, remember_decision, get_context | Store and recall code patterns, architectural decisions, and project context. The killer differentiator. |
 | `intention` | set, check, update, list | Prospective memory — "remember to do X when Y happens". Supports time, context, and event triggers. |
@@ -62,7 +69,7 @@ Say "Remembering..." then retrieve context before answering.
 ### Maintenance (5 tools)
 
 | Tool | When to Use |
 |------|-------------|
-| `system_status` | **Combined health + stats.** Returns status (healthy/degraded/critical/empty), full statistics, FSRS preview, cognitive module health, state distribution, warnings, and recommendations. At session start. |
+| `system_status` | **Combined health + stats.** Returns status (healthy/degraded/critical/empty), full statistics, FSRS preview, cognitive module health, state distribution, warnings, and recommendations. At session start (or use `session_context` which includes this). |
 | `consolidate` | Run FSRS-6 consolidation cycle. Applies decay, generates embeddings, maintenance. At session end, when retention drops. |
 | `backup` | Create SQLite database backup. Before major upgrades, weekly. |
 | `export` | Export memories as JSON/JSONL with tag and date filters. |
@@ -202,11 +209,12 @@ Memory is retrieval. Searching strengthens memory. Search liberally, save aggres
 ## Development
 
-- **Crate:** `vestige-mcp` v1.7.0, Rust 2024 edition
-- **Tests:** 335 tests, zero warnings (`cargo test -p vestige-mcp`)
+- **Crate:** `vestige-mcp` v1.8.0, Rust 2024 edition, Rust 1.93.1
+- **Tests:** 651 tests (313 core + 338 mcp), zero warnings
 - **Build:** `cargo build --release -p vestige-mcp`
 - **Features:** `embeddings` + `vector-search` (default on)
-- **Architecture:** `McpServer` holds `Arc<Mutex<Storage>>` + `Arc<Mutex<CognitiveEngine>>`
+- **Architecture:** `McpServer` holds `Arc<Storage>` + `Arc<Mutex<CognitiveEngine>>`
+- **Storage:** Interior mutability — `Storage` uses `Mutex<Connection>` for reader/writer split, all methods take `&self`. WAL mode for concurrent reads + writes.
 - **Entry:** `src/main.rs` → stdio JSON-RPC server
 - **Tools:** `src/tools/` — one file per tool, each exports `schema()` + `execute()`
 - **Cognitive:** `src/cognitive.rs` — 28-field struct, initialized once at startup

Cargo.lock generated

@@ -3655,7 +3655,7 @@ checksum = "0b928f33d975fc6ad9f86c8f283853ad26bdd5b10b7f1542aa2fa15e2289105a"
 
 [[package]]
 name = "vestige-core"
-version = "1.7.0"
+version = "1.9.1"
 dependencies = [
  "chrono",
  "directories",
@@ -3689,7 +3689,7 @@ dependencies = [
 
 [[package]]
 name = "vestige-mcp"
-version = "1.7.0"
+version = "1.9.1"
 dependencies = [
  "anyhow",
  "axum",


@@ -10,7 +10,7 @@ exclude = [
 ]
 
 [workspace.package]
-version = "1.7.0"
+version = "1.9.1"
 edition = "2024"
 license = "AGPL-3.0-only"
 repository = "https://github.com/samvallad33/vestige"


@@ -9,12 +9,13 @@
 
 > Your AI forgets everything between sessions. Vestige fixes that. Built on 130 years of memory research — FSRS-6 spaced repetition, prediction error gating, synaptic tagging — all running in a single Rust binary, 100% local.
 
-### What's New in v1.6.0
-- **6x vector storage reduction** — F16 quantization + Matryoshka 256-dim truncation
-- **Neural reranking** — Jina cross-encoder reranker for ~20% better retrieval
-- **Instant startup** — cross-encoder loads in background, zero blocking
-- **Auto-migration** — old 768-dim embeddings seamlessly upgraded
+### What's New in v1.8.0
+- **One-call session init** — new `session_context` tool replaces 5 calls (~15K → ~500 tokens)
+- **Token budgeting** — `token_budget` parameter on `search` and `session_context` for cost control
+- **Reader/writer split** — concurrent SQLite reads via WAL mode, `Arc<Storage>` everywhere
+- **int8 vectors** — 2x memory savings with <1% recall loss
+- **FTS5 porter stemmer** — 15-30% better keyword search via stemming
 
 See [CHANGELOG](CHANGELOG.md) for full version history.
@@ -127,42 +128,44 @@ This isn't a key-value store with an embedding model bolted on. Vestige implemen
 ---
 
-## Tools — 23 MCP Tools
+## Tools — 19 MCP Tools
+
+### Context Packets (v1.8.0)
+| Tool | What It Does |
+|------|-------------|
+| `session_context` | **One-call session init** — replaces 5 calls with a single token-budgeted response. Returns context, automation triggers, and expandable memory IDs |
 
 ### Core Memory
 | Tool | What It Does |
 |------|-------------|
-| `search` | 7-stage cognitive search — keyword + semantic + RRF fusion + reranking + temporal boost + competition + spreading activation |
-| `smart_ingest` | Intelligent storage with automatic CREATE/UPDATE/SUPERSEDE via Prediction Error Gating |
-| `ingest` | Direct memory storage with cognitive post-processing |
-| `memory` | Get, delete, or check memory accessibility state |
+| `search` | 7-stage cognitive search — keyword + semantic + convex fusion + reranking + temporal boost + competition + spreading activation. Optional `token_budget` for cost control |
+| `smart_ingest` | Intelligent storage with automatic CREATE/UPDATE/SUPERSEDE via Prediction Error Gating. Batch mode for session-end saves |
+| `memory` | Get, delete, check state, promote (thumbs up), or demote (thumbs down) |
 | `codebase` | Remember code patterns and architectural decisions per-project |
 | `intention` | Prospective memory — "remind me to X when Y happens" |
 
-### Cognitive Engine (v1.5.0)
+### Cognitive Engine
 | Tool | What It Does |
 |------|-------------|
 | `dream` | Memory consolidation via replay — discovers hidden connections, synthesizes insights |
-| `explore_connections` | Graph traversal — build reasoning chains, find associations via spreading activation, discover bridges between memories |
+| `explore_connections` | Graph traversal — reasoning chains, associations via spreading activation, bridges between memories |
 | `predict` | Proactive retrieval — predicts what memories you'll need next based on context and activity patterns |
-| `restore` | Restore memories from JSON backup files |
 
-### Feedback & Scoring
+### Scoring & Dedup
 | Tool | What It Does |
 |------|-------------|
-| `promote_memory` / `demote_memory` | Feedback loop with full cognitive pipeline — reward signals, reconsolidation, competition |
 | `importance_score` | 4-channel neuroscience scoring (novelty, arousal, reward, attention) |
-| `find_duplicates` | Self-healing — detect and merge redundant memories via cosine similarity |
 
-### Auto-Save & Maintenance
+### Maintenance & Data
 | Tool | What It Does |
 |------|-------------|
-| `session_checkpoint` | Batch-save up to 20 items in one call |
+| `system_status` | Combined health + statistics + cognitive state breakdown + recommendations |
+| `find_duplicates` | Self-healing — detect and merge redundant memories via cosine similarity |
 | `consolidate` | Run FSRS-6 decay cycle (also runs automatically every 6 hours) |
 | `memory_timeline` | Browse memories chronologically, grouped by day |
 | `memory_changelog` | Audit trail of memory state transitions |
-| `health_check` / `stats` | System health, retention curves, cognitive state breakdown |
 | `backup` / `export` / `gc` | Database backup, JSON export, garbage collection |
+| `restore` | Restore memories from JSON backup files |
 
 ---


@@ -1,6 +1,6 @@
 [package]
 name = "vestige-core"
-version = "1.7.0"
+version = "1.9.1"
 edition = "2024"
 rust-version = "1.85"
 authors = ["Vestige Team"]


@@ -252,20 +252,35 @@ impl ConsolidationScheduler {
     /// Check if consolidation should run
     ///
-    /// Returns true if:
-    /// - Auto consolidation is enabled
-    /// - Sufficient time has passed since last consolidation
-    /// - System is currently idle
+    /// v1.9.0: Improved scheduler with multiple trigger conditions:
+    /// - Full consolidation: >6h stale AND >10 new memories since last
+    /// - Mini-consolidation (decay only): >2h if active
+    /// - System idle AND interval passed
     pub fn should_consolidate(&self) -> bool {
         if !self.auto_enabled {
             return false;
         }
 
         let time_since_last = Utc::now() - self.last_consolidation;
+
+        // Trigger 1: Standard interval + idle check
         let interval_passed = time_since_last >= self.consolidation_interval;
         let is_idle = self.activity_tracker.is_idle();
+        if interval_passed && is_idle {
+            return true;
+        }
 
-        interval_passed && is_idle
+        // Trigger 2: >6h stale (force consolidation regardless of idle)
+        if time_since_last >= Duration::hours(6) {
+            return true;
+        }
+
+        // Trigger 3: Mini-consolidation every 2h if active
+        if time_since_last >= Duration::hours(2) && !is_idle {
+            return true;
+        }
+
+        false
     }
 
     /// Force check if consolidation should run (ignoring idle check)
@@ -1720,12 +1735,16 @@ fn cosine_similarity(a: &[f32], b: &[f32]) -> f64 {
     (dot / (mag_a * mag_b)) as f64
 }
 
-/// Truncate string to max length
+/// Truncate string to max length (UTF-8 safe)
 fn truncate(s: &str, max_len: usize) -> &str {
     if s.len() <= max_len {
         s
     } else {
-        &s[..max_len]
+        let mut end = max_len;
+        while end > 0 && !s.is_char_boundary(end) {
+            end -= 1;
+        }
+        &s[..end]
     }
 }
@@ -1905,7 +1924,8 @@ mod tests {
         // Should have completed all stages
         assert!(report.stage1_replay.is_some());
-        assert!(report.duration_ms >= 0);
+        // duration_ms is u64, so just verify the field is accessible
+        let _ = report.duration_ms;
         assert!(report.completed_at <= Utc::now());
     }


@@ -43,6 +43,8 @@ pub use dreams::{
     ConsolidationReport,
     // Sleep Consolidation types
     ConsolidationScheduler,
+    DiscoveredConnection,
+    DiscoveredConnectionType,
     DreamConfig,
     // DreamMemory - input type for dreaming
     DreamMemory,


@@ -623,7 +623,6 @@ impl UserRepository for SqliteUserRepository {
 #[cfg(test)]
 mod tests {
     use super::*;
-    use crate::codebase::context::ProjectType;
 
     fn create_test_pattern() -> CodePattern {
         CodePattern {


@@ -39,7 +39,7 @@ impl CodeEmbedding {
     }
 
     /// Initialize the embedding model
-    pub fn init(&mut self) -> Result<(), EmbeddingError> {
+    pub fn init(&self) -> Result<(), EmbeddingError> {
         self.service.init()
     }


@@ -201,7 +201,7 @@ impl Embedding {
 /// Service for generating and managing embeddings
 pub struct EmbeddingService {
-    model_loaded: bool,
+    _unused: (),
 }
 
 impl Default for EmbeddingService {
@@ -214,7 +214,7 @@ impl EmbeddingService {
     /// Create a new embedding service
     pub fn new() -> Self {
         Self {
-            model_loaded: false,
+            _unused: (),
         }
     }
@@ -235,9 +235,8 @@ impl EmbeddingService {
     }
 
     /// Initialize the model (downloads if necessary)
-    pub fn init(&mut self) -> Result<(), EmbeddingError> {
+    pub fn init(&self) -> Result<(), EmbeddingError> {
         let _model = get_model()?; // Ensures model is loaded and returns any init errors
-        self.model_loaded = true;
         Ok(())
     }


@@ -138,8 +138,8 @@ pub use fsrs::{
 // Storage layer
 pub use storage::{
-    ConsolidationHistoryRecord, DreamHistoryRecord, InsightRecord, IntentionRecord, Result,
-    SmartIngestResult, StateTransitionRecord, Storage, StorageError,
+    ConnectionRecord, ConsolidationHistoryRecord, DreamHistoryRecord, InsightRecord,
+    IntentionRecord, Result, SmartIngestResult, StateTransitionRecord, Storage, StorageError,
 };
 
 // Consolidation (sleep-inspired memory processing)
@@ -175,6 +175,8 @@ pub use advanced::{
     DreamConfig,
     // DreamMemory - input type for dreaming
     DreamMemory,
+    DiscoveredConnection,
+    DiscoveredConnectionType,
     DreamResult,
     EmbeddingStrategy,
     ImportanceDecayConfig,


@@ -2106,7 +2106,8 @@ mod tests {
         )
         .unwrap();
 
-        assert!(barcode.id >= 0);
+        // barcode.id is u64, verify it was assigned
+        let _ = barcode.id;
         assert_eq!(index.len(), 1);
 
         let retrieved = index.get_index("test-id").unwrap();


@@ -137,7 +137,7 @@ impl VectorIndex {
         let options = IndexOptions {
             dimensions: config.dimensions,
             metric: config.metric,
-            quantization: ScalarKind::F16,
+            quantization: ScalarKind::I8,
             connectivity: config.connectivity,
             expansion_add: config.expansion_add,
             expansion_search: config.expansion_search,
@@ -325,7 +325,7 @@ impl VectorIndex {
         let options = IndexOptions {
             dimensions: config.dimensions,
             metric: config.metric,
-            quantization: ScalarKind::F16,
+            quantization: ScalarKind::I8,
             connectivity: config.connectivity,
             expansion_add: config.expansion_add,
             expansion_search: config.expansion_search,


@@ -34,6 +34,16 @@ pub const MIGRATIONS: &[Migration] = &[
         description: "Dream history persistence for automation triggers",
         up: MIGRATION_V6_UP,
     },
+    Migration {
+        version: 7,
+        description: "Performance: page_size 8192, FTS5 porter tokenizer",
+        up: MIGRATION_V7_UP,
+    },
+    Migration {
+        version: 8,
+        description: "v1.9.0 Autonomic: waking SWR tags, utility scoring, retention tracking",
+        up: MIGRATION_V8_UP,
+    },
 ];
 
 /// A database migration
@@ -472,6 +482,73 @@ CREATE INDEX IF NOT EXISTS idx_dream_history_dreamed_at ON dream_history(dreamed
 UPDATE schema_version SET version = 6, applied_at = datetime('now');
 "#;
 
+/// V7: Performance — FTS5 porter tokenizer for 15-30% better keyword recall (stemming)
+/// page_size upgrade handled in apply_migrations() since VACUUM can't run inside execute_batch
+const MIGRATION_V7_UP: &str = r#"
+-- FTS5 porter tokenizer upgrade (15-30% better keyword recall via stemming)
+DROP TRIGGER IF EXISTS knowledge_ai;
+DROP TRIGGER IF EXISTS knowledge_ad;
+DROP TRIGGER IF EXISTS knowledge_au;
+DROP TABLE IF EXISTS knowledge_fts;
+
+CREATE VIRTUAL TABLE knowledge_fts USING fts5(
+    id, content, tags,
+    content='knowledge_nodes',
+    content_rowid='rowid',
+    tokenize='porter ascii'
+);
+
+-- Rebuild FTS index from existing data with new tokenizer
+INSERT INTO knowledge_fts(knowledge_fts) VALUES('rebuild');
+
+-- Re-create sync triggers
+CREATE TRIGGER knowledge_ai AFTER INSERT ON knowledge_nodes BEGIN
+    INSERT INTO knowledge_fts(rowid, id, content, tags)
+    VALUES (NEW.rowid, NEW.id, NEW.content, NEW.tags);
+END;
+
+CREATE TRIGGER knowledge_ad AFTER DELETE ON knowledge_nodes BEGIN
+    INSERT INTO knowledge_fts(knowledge_fts, rowid, id, content, tags)
+    VALUES ('delete', OLD.rowid, OLD.id, OLD.content, OLD.tags);
+END;
+
+CREATE TRIGGER knowledge_au AFTER UPDATE ON knowledge_nodes BEGIN
+    INSERT INTO knowledge_fts(knowledge_fts, rowid, id, content, tags)
+    VALUES ('delete', OLD.rowid, OLD.id, OLD.content, OLD.tags);
+    INSERT INTO knowledge_fts(rowid, id, content, tags)
+    VALUES (NEW.rowid, NEW.id, NEW.content, NEW.tags);
+END;
+
+UPDATE schema_version SET version = 7, applied_at = datetime('now');
+"#;
+
+/// V8: v1.9.0 Autonomic — Waking SWR tags, utility scoring, retention trend tracking
+const MIGRATION_V8_UP: &str = r#"
+-- Waking SWR (Sharp-Wave Ripple) tagging
+-- Memories tagged during waking operation get preferential replay during dream cycles
+ALTER TABLE knowledge_nodes ADD COLUMN waking_tag BOOLEAN DEFAULT FALSE;
+ALTER TABLE knowledge_nodes ADD COLUMN waking_tag_at TEXT;
+
+-- Utility scoring (MemRL-inspired: times_useful / times_retrieved)
+ALTER TABLE knowledge_nodes ADD COLUMN utility_score REAL DEFAULT 0.0;
+ALTER TABLE knowledge_nodes ADD COLUMN times_retrieved INTEGER DEFAULT 0;
+ALTER TABLE knowledge_nodes ADD COLUMN times_useful INTEGER DEFAULT 0;
+
+-- Retention trend tracking (for retention target system)
+CREATE TABLE IF NOT EXISTS retention_snapshots (
+    id INTEGER PRIMARY KEY AUTOINCREMENT,
+    snapshot_at TEXT NOT NULL,
+    avg_retention REAL NOT NULL,
+    total_memories INTEGER NOT NULL,
+    memories_below_target INTEGER NOT NULL DEFAULT 0,
+    gc_triggered BOOLEAN DEFAULT FALSE
+);
+
+CREATE INDEX IF NOT EXISTS idx_retention_snapshots_at ON retention_snapshots(snapshot_at);
+
+UPDATE schema_version SET version = 8, applied_at = datetime('now');
+"#;
+
 /// Get current schema version from database
 pub fn get_current_version(conn: &rusqlite::Connection) -> rusqlite::Result<u32> {
     conn.query_row(
@@ -498,6 +575,14 @@ pub fn apply_migrations(conn: &rusqlite::Connection) -> rusqlite::Result<u32> {
             // Use execute_batch to handle multi-statement SQL including triggers
             conn.execute_batch(migration.up)?;
 
+            // V7: Upgrade page_size to 8192 (10-30% faster large-row reads)
+            // VACUUM rewrites the DB with the new page size — can't run inside execute_batch
+            if migration.version == 7 {
+                conn.pragma_update(None, "page_size", 8192)?;
+                conn.execute_batch("VACUUM;")?;
+                tracing::info!("Database page_size upgraded to 8192 via VACUUM");
+            }
+
             applied += 1;
         }
     }


@@ -11,6 +11,6 @@ mod sqlite;
 
 pub use migrations::MIGRATIONS;
 pub use sqlite::{
-    ConsolidationHistoryRecord, DreamHistoryRecord, InsightRecord, IntentionRecord, Result,
-    SmartIngestResult, StateTransitionRecord, Storage, StorageError,
+    ConnectionRecord, ConsolidationHistoryRecord, DreamHistoryRecord, InsightRecord,
+    IntentionRecord, Result, SmartIngestResult, StateTransitionRecord, Storage, StorageError,
 };

(File diff suppressed because it is too large)


@@ -1,6 +1,6 @@
 [package]
 name = "vestige-mcp"
-version = "1.7.0"
+version = "1.9.1"
 edition = "2024"
 description = "Cognitive memory MCP server for Claude - FSRS-6, spreading activation, synaptic tagging, and 130 years of memory research"
 authors = ["samvallad33"]


@@ -394,7 +394,7 @@ fn run_consolidate() -> anyhow::Result<()> {
     println!("Running memory consolidation cycle...");
     println!();
 
-    let mut storage = Storage::new(None)?;
+    let storage = Storage::new(None)?;
     let result = storage.run_consolidation()?;
 
     println!("{}: {}", "Nodes Processed".white().bold(), result.nodes_processed);
@@ -456,7 +456,7 @@ fn run_restore(backup_path: PathBuf) -> anyhow::Result<()> {
     // Initialize storage
     println!("Initializing storage...");
-    let mut storage = Storage::new(None)?;
+    let storage = Storage::new(None)?;
 
     println!("Generating embeddings and ingesting memories...");
     println!();
@@ -728,7 +728,7 @@ fn run_gc(
     println!("{}", "=== Vestige Garbage Collection ===".cyan().bold());
     println!();
 
-    let mut storage = Storage::new(None)?;
+    let storage = Storage::new(None)?;
     let all_nodes = fetch_all_nodes(&storage)?;
     let now = Utc::now();
@@ -892,7 +892,7 @@ fn run_ingest(
         valid_until: None,
     };
 
-    let mut storage = Storage::new(None)?;
+    let storage = Storage::new(None)?;
 
     // Try smart_ingest (PE Gating) if available, otherwise regular ingest
     #[cfg(all(feature = "embeddings", feature = "vector-search"))]
@@ -943,7 +943,7 @@ fn run_dashboard(port: u16, open_browser: bool) -> anyhow::Result<()> {
     println!();
     println!("Starting dashboard at {}...", format!("http://127.0.0.1:{}", port).cyan());
 
-    let mut storage = Storage::new(None)?;
+    let storage = Storage::new(None)?;
 
     // Try to initialize embeddings for search support
    #[cfg(feature = "embeddings")]
@@ -957,7 +957,7 @@ fn run_dashboard(port: u16, open_browser: bool) -> anyhow::Result<()> {
         }
     }
 
-    let storage = std::sync::Arc::new(tokio::sync::Mutex::new(storage));
+    let storage = std::sync::Arc::new(storage);
 
     let rt = tokio::runtime::Runtime::new()?;
     rt.block_on(async move {
@@ -43,7 +43,7 @@ fn main() -> anyhow::Result<()> {
 // Initialize storage (uses default path)
 println!("Initializing storage...");
-let mut storage = Storage::new(None)?;
+let storage = Storage::new(None)?;
 println!("Generating embeddings and ingesting memories...\n");
@@ -30,14 +30,12 @@ pub async fn list_memories(
 State(state): State<AppState>,
 Query(params): Query<MemoryListParams>,
 ) -> Result<Json<Value>, StatusCode> {
-let storage = state.storage.lock().await;
 let limit = params.limit.unwrap_or(50).clamp(1, 200);
 let offset = params.offset.unwrap_or(0).max(0);
 if let Some(query) = params.q.as_ref().filter(|q| !q.trim().is_empty()) {
-{
 // Use hybrid search
-let results = storage
+let results = state.storage
 .hybrid_search(query, limit, 0.3, 0.7)
 .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
@@ -73,10 +71,9 @@ pub async fn list_memories(
 "memories": formatted,
 })));
 }
-}
 // No search query — list all memories
-let mut nodes = storage
+let mut nodes = state.storage
 .get_all_nodes(limit, offset)
 .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
@@ -121,8 +118,7 @@ pub async fn get_memory(
 State(state): State<AppState>,
 Path(id): Path<String>,
 ) -> Result<Json<Value>, StatusCode> {
-let storage = state.storage.lock().await;
-let node = storage
+let node = state.storage
 .get_node(&id)
 .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?
 .ok_or(StatusCode::NOT_FOUND)?;
@@ -153,8 +149,7 @@ pub async fn delete_memory(
 State(state): State<AppState>,
 Path(id): Path<String>,
 ) -> Result<Json<Value>, StatusCode> {
-let mut storage = state.storage.lock().await;
-let deleted = storage
+let deleted = state.storage
 .delete_node(&id)
 .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
@@ -170,8 +165,7 @@ pub async fn promote_memory(
 State(state): State<AppState>,
 Path(id): Path<String>,
 ) -> Result<Json<Value>, StatusCode> {
-let storage = state.storage.lock().await;
-let node = storage
+let node = state.storage
 .promote_memory(&id)
 .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
@@ -187,8 +181,7 @@ pub async fn demote_memory(
 State(state): State<AppState>,
 Path(id): Path<String>,
 ) -> Result<Json<Value>, StatusCode> {
-let storage = state.storage.lock().await;
-let node = storage
+let node = state.storage
 .demote_memory(&id)
 .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
@@ -203,8 +196,7 @@ pub async fn demote_memory(
 pub async fn get_stats(
 State(state): State<AppState>,
 ) -> Result<Json<Value>, StatusCode> {
-let storage = state.storage.lock().await;
-let stats = storage
+let stats = state.storage
 .get_stats()
 .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
@@ -239,12 +231,11 @@ pub async fn get_timeline(
 State(state): State<AppState>,
 Query(params): Query<TimelineParams>,
 ) -> Result<Json<Value>, StatusCode> {
-let storage = state.storage.lock().await;
 let days = params.days.unwrap_or(7).clamp(1, 90);
 let limit = params.limit.unwrap_or(200).clamp(1, 500);
 let start = Utc::now() - Duration::days(days);
-let nodes = storage
+let nodes = state.storage
 .query_time_range(Some(start), Some(Utc::now()), limit)
 .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
@@ -292,8 +283,7 @@ pub async fn get_timeline(
 pub async fn health_check(
 State(state): State<AppState>,
 ) -> Result<Json<Value>, StatusCode> {
-let storage = state.storage.lock().await;
-let stats = storage
+let stats = state.storage
 .get_stats()
 .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
@@ -10,7 +10,6 @@ use axum::routing::{delete, get, post};
 use axum::Router;
 use std::net::SocketAddr;
 use std::sync::Arc;
-use tokio::sync::Mutex;
 use tower::ServiceBuilder;
 use tower_http::cors::CorsLayer;
 use tower_http::set_header::SetResponseHeaderLayer;
@@ -20,7 +19,7 @@ use state::AppState;
 use vestige_core::Storage;
 /// Build the axum router with all dashboard routes
-pub fn build_router(storage: Arc<Mutex<Storage>>, port: u16) -> Router {
+pub fn build_router(storage: Arc<Storage>, port: u16) -> Router {
 let state = AppState { storage };
 let origin = format!("http://127.0.0.1:{}", port)
@@ -59,7 +58,7 @@ pub fn build_router(storage: Arc<Mutex<Storage>>, port: u16) -> Router {
 /// Start the dashboard HTTP server (blocking — use in CLI mode)
 pub async fn start_dashboard(
-storage: Arc<Mutex<Storage>>,
+storage: Arc<Storage>,
 port: u16,
 open_browser: bool,
 ) -> Result<(), Box<dyn std::error::Error>> {
@@ -83,7 +82,7 @@ pub async fn start_dashboard(
 /// Start the dashboard as a background task (non-blocking — use in MCP server)
 pub async fn start_background(
-storage: Arc<Mutex<Storage>>,
+storage: Arc<Storage>,
 port: u16,
 ) -> Result<(), Box<dyn std::error::Error>> {
 let app = build_router(storage, port);
@@ -1,11 +1,10 @@
 //! Dashboard shared state
 use std::sync::Arc;
-use tokio::sync::Mutex;
 use vestige_core::Storage;
 /// Shared application state for the dashboard
 #[derive(Clone)]
 pub struct AppState {
-pub storage: Arc<Mutex<Storage>>,
+pub storage: Arc<Storage>,
 }
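The change running through every hunk in this commit is the same one visible in `AppState` above: `Arc<Mutex<Storage>>` becomes a plain `Arc<Storage>`, and callers stop doing `storage.lock().await` before each operation. A minimal sketch of the pattern, assuming `Storage` moves its locking inward so its methods take `&self` (this toy `Storage` with an internal `Mutex<Vec<String>>` is a hypothetical stand-in, not Vestige's actual type):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical stand-in: interior mutability lives inside the type,
// so sharers hold Arc<Storage> with no outer lock serializing handlers.
struct Storage {
    nodes: Mutex<Vec<String>>, // fine-grained lock on the mutable field
}

impl Storage {
    fn new() -> Self {
        Storage { nodes: Mutex::new(Vec::new()) }
    }
    // &self, not &mut self: callers never need a Mutex around the Arc
    fn ingest(&self, content: &str) {
        self.nodes.lock().unwrap().push(content.to_string());
    }
    fn count(&self) -> usize {
        self.nodes.lock().unwrap().len()
    }
}

fn main() {
    let storage = Arc::new(Storage::new()); // Arc<Storage>, no wrapper Mutex
    let handles: Vec<_> = (0..4)
        .map(|i| {
            let s = Arc::clone(&storage);
            thread::spawn(move || s.ingest(&format!("memory {}", i)))
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(storage.count(), 4);
}
```

The practical payoff, reflected in the diff, is that read-heavy handlers no longer contend on one global async lock, and the `drop(storage)` / `try_lock()` workarounds disappear.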
@@ -134,7 +134,7 @@ async fn main() {
 // Initialize storage with optional custom data directory
 let storage = match Storage::new(data_dir) {
-Ok(mut s) => {
+Ok(s) => {
 info!("Storage initialized successfully");
 // Try to initialize embeddings early and log any issues
@@ -149,7 +149,7 @@ async fn main() {
 }
 }
-Arc::new(Mutex::new(s))
+Arc::new(s)
 }
 Err(e) => {
 error!("Failed to initialize storage: {}", e);
@@ -173,9 +173,7 @@ async fn main() {
 loop {
 // Check whether consolidation is actually needed
-let should_run = {
-let storage = storage_clone.lock().await;
-match storage.get_last_consolidation() {
+let should_run = match storage_clone.get_last_consolidation() {
 Ok(Some(last)) => {
 let elapsed = chrono::Utc::now() - last;
 let stale = elapsed > chrono::Duration::hours(interval_hours as i64);
@@ -196,12 +194,10 @@ async fn main() {
 warn!("Could not read consolidation history: {} — running anyway", e);
 true
 }
-}
 };
 if should_run {
-let mut storage = storage_clone.lock().await;
-match storage.run_consolidation() {
+match storage_clone.run_consolidation() {
 Ok(result) => {
 info!(
 nodes_processed = result.nodes_processed,
@@ -3,12 +3,11 @@
 //! codebase:// URI scheme resources for the MCP server.
 use std::sync::Arc;
-use tokio::sync::Mutex;
 use vestige_core::{RecallInput, SearchMode, Storage};
 /// Read a codebase:// resource
-pub async fn read(storage: &Arc<Mutex<Storage>>, uri: &str) -> Result<String, String> {
+pub async fn read(storage: &Arc<Storage>, uri: &str) -> Result<String, String> {
 let path = uri.strip_prefix("codebase://").unwrap_or("");
 // Parse query parameters if present
@@ -38,9 +37,7 @@ fn parse_codebase_param(query: Option<&str>) -> Option<String> {
 })
 }
-async fn read_structure(storage: &Arc<Mutex<Storage>>) -> Result<String, String> {
-let storage = storage.lock().await;
+async fn read_structure(storage: &Arc<Storage>) -> Result<String, String> {
 // Get all pattern and decision nodes to infer structure
 // NOTE: We run separate queries because FTS5 sanitization removes OR operators
 // and wraps queries in quotes (phrase search), so "pattern OR decision" would
@@ -92,8 +89,7 @@ async fn read_structure(storage: &Arc<Mutex<Storage>>) -> Result<String, String>
 serde_json::to_string_pretty(&result).map_err(|e| e.to_string())
 }
-async fn read_patterns(storage: &Arc<Mutex<Storage>>, query: Option<&str>) -> Result<String, String> {
-let storage = storage.lock().await;
+async fn read_patterns(storage: &Arc<Storage>, query: Option<&str>) -> Result<String, String> {
 let codebase = parse_codebase_param(query);
 let search_query = match &codebase {
@@ -135,8 +131,7 @@ async fn read_patterns(storage: &Arc<Mutex<Storage>>, query: Option<&str>) -> Re
 serde_json::to_string_pretty(&result).map_err(|e| e.to_string())
 }
-async fn read_decisions(storage: &Arc<Mutex<Storage>>, query: Option<&str>) -> Result<String, String> {
-let storage = storage.lock().await;
+async fn read_decisions(storage: &Arc<Storage>, query: Option<&str>) -> Result<String, String> {
 let codebase = parse_codebase_param(query);
 let search_query = match &codebase {
@@ -3,12 +3,11 @@
 //! memory:// URI scheme resources for the MCP server.
 use std::sync::Arc;
-use tokio::sync::Mutex;
 use vestige_core::Storage;
 /// Read a memory:// resource
-pub async fn read(storage: &Arc<Mutex<Storage>>, uri: &str) -> Result<String, String> {
+pub async fn read(storage: &Arc<Storage>, uri: &str) -> Result<String, String> {
 let path = uri.strip_prefix("memory://").unwrap_or("");
 // Parse query parameters if present
@@ -50,8 +49,7 @@ fn parse_query_param(query: Option<&str>, key: &str, default: i32) -> i32 {
 .clamp(1, 100)
 }
-async fn read_stats(storage: &Arc<Mutex<Storage>>) -> Result<String, String> {
-let storage = storage.lock().await;
+async fn read_stats(storage: &Arc<Storage>) -> Result<String, String> {
 let stats = storage.get_stats().map_err(|e| e.to_string())?;
 let embedding_coverage = if stats.total_nodes > 0 {
@@ -88,8 +86,7 @@ async fn read_stats(storage: &Arc<Mutex<Storage>>) -> Result<String, String> {
 serde_json::to_string_pretty(&result).map_err(|e| e.to_string())
 }
-async fn read_recent(storage: &Arc<Mutex<Storage>>, limit: i32) -> Result<String, String> {
-let storage = storage.lock().await;
+async fn read_recent(storage: &Arc<Storage>, limit: i32) -> Result<String, String> {
 let nodes = storage.get_all_nodes(limit, 0).map_err(|e| e.to_string())?;
 let items: Vec<serde_json::Value> = nodes
@@ -118,9 +115,7 @@ async fn read_recent(storage: &Arc<Mutex<Storage>>, limit: i32) -> Result<String
 serde_json::to_string_pretty(&result).map_err(|e| e.to_string())
 }
-async fn read_decaying(storage: &Arc<Mutex<Storage>>) -> Result<String, String> {
-let storage = storage.lock().await;
+async fn read_decaying(storage: &Arc<Storage>) -> Result<String, String> {
 // Get nodes with low retention (below 0.5)
 let all_nodes = storage.get_all_nodes(100, 0).map_err(|e| e.to_string())?;
@@ -176,8 +171,7 @@ async fn read_decaying(storage: &Arc<Mutex<Storage>>) -> Result<String, String>
 serde_json::to_string_pretty(&result).map_err(|e| e.to_string())
 }
-async fn read_due(storage: &Arc<Mutex<Storage>>) -> Result<String, String> {
-let storage = storage.lock().await;
+async fn read_due(storage: &Arc<Storage>) -> Result<String, String> {
 let nodes = storage.get_review_queue(20).map_err(|e| e.to_string())?;
 let items: Vec<serde_json::Value> = nodes
@@ -208,8 +202,7 @@ async fn read_due(storage: &Arc<Mutex<Storage>>) -> Result<String, String> {
 serde_json::to_string_pretty(&result).map_err(|e| e.to_string())
 }
-async fn read_intentions(storage: &Arc<Mutex<Storage>>) -> Result<String, String> {
-let storage = storage.lock().await;
+async fn read_intentions(storage: &Arc<Storage>) -> Result<String, String> {
 let intentions = storage.get_active_intentions().map_err(|e| e.to_string())?;
 let now = chrono::Utc::now();
@@ -247,8 +240,7 @@ async fn read_intentions(storage: &Arc<Mutex<Storage>>) -> Result<String, String
 serde_json::to_string_pretty(&result).map_err(|e| e.to_string())
 }
-async fn read_triggered_intentions(storage: &Arc<Mutex<Storage>>) -> Result<String, String> {
-let storage = storage.lock().await;
+async fn read_triggered_intentions(storage: &Arc<Storage>) -> Result<String, String> {
 let overdue = storage.get_overdue_intentions().map_err(|e| e.to_string())?;
 let now = chrono::Utc::now();
@@ -293,8 +285,7 @@ async fn read_triggered_intentions(storage: &Arc<Mutex<Storage>>) -> Result<Stri
 serde_json::to_string_pretty(&result).map_err(|e| e.to_string())
 }
-async fn read_insights(storage: &Arc<Mutex<Storage>>) -> Result<String, String> {
-let storage = storage.lock().await;
+async fn read_insights(storage: &Arc<Storage>) -> Result<String, String> {
 let insights = storage.get_insights(50).map_err(|e| e.to_string())?;
 let pending: Vec<_> = insights.iter().filter(|i| i.feedback.is_none()).collect();
@@ -327,8 +318,7 @@ async fn read_insights(storage: &Arc<Mutex<Storage>>) -> Result<String, String>
 serde_json::to_string_pretty(&result).map_err(|e| e.to_string())
 }
-async fn read_consolidation_log(storage: &Arc<Mutex<Storage>>) -> Result<String, String> {
-let storage = storage.lock().await;
+async fn read_consolidation_log(storage: &Arc<Storage>) -> Result<String, String> {
 let history = storage.get_consolidation_history(20).map_err(|e| e.to_string())?;
 let last_run = storage.get_last_consolidation().map_err(|e| e.to_string())?;
@@ -22,7 +22,7 @@ use vestige_core::Storage;
 /// MCP Server implementation
 pub struct McpServer {
-storage: Arc<Mutex<Storage>>,
+storage: Arc<Storage>,
 cognitive: Arc<Mutex<CognitiveEngine>>,
 initialized: bool,
 /// Tool call counter for inline consolidation trigger (every 100 calls)
@@ -30,7 +30,7 @@ pub struct McpServer {
 }
 impl McpServer {
-pub fn new(storage: Arc<Mutex<Storage>>, cognitive: Arc<Mutex<CognitiveEngine>>) -> Self {
+pub fn new(storage: Arc<Storage>, cognitive: Arc<Mutex<CognitiveEngine>>) -> Self {
 Self {
 storage,
 cognitive,
@@ -131,7 +131,7 @@ impl McpServer {
 /// Handle tools/list request
 async fn handle_tools_list(&self) -> Result<serde_json::Value, JsonRpcError> {
-// v1.7: 18 tools. Deprecated tools still work via redirects in handle_tools_call.
+// v1.8: 19 tools. Deprecated tools still work via redirects in handle_tools_call.
 let tools = vec![
 // ================================================================
 // UNIFIED TOOLS (v1.1+)
@@ -244,6 +244,27 @@ impl McpServer {
 description: Some("Restore memories from a JSON backup file. Supports MCP wrapper format, RecallResult format, and direct memory array format.".to_string()),
 input_schema: tools::restore::schema(),
 },
+// ================================================================
+// CONTEXT PACKETS (v1.8+)
+// ================================================================
+ToolDescription {
+name: "session_context".to_string(),
+description: Some("One-call session initialization. Combines search, intentions, status, predictions, and codebase context into a single token-budgeted response. Replaces 5 separate calls at session start.".to_string()),
+input_schema: tools::session_context::schema(),
+},
+// ================================================================
+// AUTONOMIC TOOLS (v1.9+)
+// ================================================================
+ToolDescription {
+name: "memory_health".to_string(),
+description: Some("Retention dashboard. Returns avg retention, retention distribution (buckets: 0-20%, 20-40%, etc.), trend (improving/declining/stable), and recommendation. Lightweight alternative to full system_status focused on memory quality.".to_string()),
+input_schema: tools::health::schema(),
+},
+ToolDescription {
+name: "memory_graph".to_string(),
+description: Some("Subgraph export for visualization. Input: center_id or query, depth (1-3), max_nodes. Returns nodes with force-directed layout positions and edges with weights. Powers memory graph visualization.".to_string()),
+input_schema: tools::graph::schema(),
+},
 ];
 let result = ListToolsResult { tools };
@@ -571,6 +592,17 @@ impl McpServer {
 "predict" => tools::predict::execute(&self.storage, &self.cognitive, request.arguments).await,
 "restore" => tools::restore::execute(&self.storage, request.arguments).await,
+// ================================================================
+// CONTEXT PACKETS (v1.8+)
+// ================================================================
+"session_context" => tools::session_context::execute(&self.storage, &self.cognitive, request.arguments).await,
+// ================================================================
+// AUTONOMIC TOOLS (v1.9+)
+// ================================================================
+"memory_health" => tools::health::execute(&self.storage, request.arguments).await,
+"memory_graph" => tools::graph::execute(&self.storage, request.arguments).await,
 name => {
 return Err(JsonRpcError::method_not_found_with_message(&format!(
 "Unknown tool: {}",
@@ -618,8 +650,7 @@ impl McpServer {
 let _expired = cog.reconsolidation.reconsolidate_expired();
 }
-if let Ok(mut storage) = storage_clone.try_lock() {
-match storage.run_consolidation() {
+match storage_clone.run_consolidation() {
 Ok(result) => {
 tracing::info!(
 tool_calls = count,
@@ -634,7 +665,6 @@ impl McpServer {
 tracing::warn!("Inline consolidation failed: {}", e);
 }
 }
-}
 });
 }
@@ -766,10 +796,10 @@ mod tests {
 use tempfile::TempDir;
 /// Create a test storage instance with a temporary database
-async fn test_storage() -> (Arc<Mutex<Storage>>, TempDir) {
+async fn test_storage() -> (Arc<Storage>, TempDir) {
 let dir = TempDir::new().unwrap();
 let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
-(Arc::new(Mutex::new(storage)), dir)
+(Arc::new(storage), dir)
 }
 /// Create a test server with temporary storage
@@ -913,8 +943,8 @@ mod tests {
 let result = response.result.unwrap();
 let tools = result["tools"].as_array().unwrap();
-// v1.7: 18 tools (4 unified + 1 core + 2 temporal + 5 maintenance + 2 auto-save + 3 cognitive + 1 restore)
-assert_eq!(tools.len(), 18, "Expected exactly 18 tools in v1.7+");
+// v1.9: 21 tools (4 unified + 1 core + 2 temporal + 5 maintenance + 2 auto-save + 3 cognitive + 1 restore + 1 session_context + 2 autonomic)
+assert_eq!(tools.len(), 21, "Expected exactly 21 tools in v1.9+");
 let tool_names: Vec<&str> = tools
 .iter()
@@ -958,6 +988,13 @@ mod tests {
 assert!(tool_names.contains(&"explore_connections"));
 assert!(tool_names.contains(&"predict"));
 assert!(tool_names.contains(&"restore"));
+// Context packets (v1.8)
+assert!(tool_names.contains(&"session_context"));
+// Autonomic tools (v1.9)
+assert!(tool_names.contains(&"memory_health"));
+assert!(tool_names.contains(&"memory_graph"));
 }
 #[tokio::test]
@@ -8,7 +8,7 @@ use chrono::{DateTime, Utc};
 use serde::Deserialize;
 use serde_json::Value;
 use std::sync::Arc;
-use tokio::sync::Mutex;
 use uuid::Uuid;
 use vestige_core::Storage;
@@ -55,7 +55,7 @@ struct ChangelogArgs {
 /// Execute memory_changelog tool
 pub async fn execute(
-storage: &Arc<Mutex<Storage>>,
+storage: &Arc<Storage>,
 args: Option<Value>,
 ) -> Result<Value, String> {
 let args: ChangelogArgs = match args {
@@ -69,7 +69,6 @@ pub async fn execute(
 };
 let limit = args.limit.unwrap_or(20).clamp(1, 100);
-let storage = storage.lock().await;
 if let Some(ref memory_id) = args.memory_id {
 // Per-memory mode: state transitions for a specific memory
@@ -196,15 +195,14 @@ mod tests {
 use super::*;
 use tempfile::TempDir;
-async fn test_storage() -> (Arc<Mutex<Storage>>, TempDir) {
+async fn test_storage() -> (Arc<Storage>, TempDir) {
 let dir = TempDir::new().unwrap();
 let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
-(Arc::new(Mutex::new(storage)), dir)
+(Arc::new(storage), dir)
 }
-async fn ingest_test_memory(storage: &Arc<Mutex<Storage>>) -> String {
-let mut s = storage.lock().await;
-let node = s
+async fn ingest_test_memory(storage: &Arc<Storage>) -> String {
+let node = storage
 .ingest(vestige_core::IngestInput {
 content: "Changelog test memory".to_string(),
 node_type: "fact".to_string(),
@@ -6,7 +6,7 @@
 use serde::Deserialize;
 use serde_json::Value;
 use std::sync::Arc;
-use tokio::sync::Mutex;
 use vestige_core::{IngestInput, Storage};
@@ -64,7 +64,7 @@ struct CheckpointItem {
 }
 pub async fn execute(
-storage: &Arc<Mutex<Storage>>,
+storage: &Arc<Storage>,
 args: Option<Value>,
 ) -> Result<Value, String> {
 let args: CheckpointArgs = match args {
@@ -80,7 +80,6 @@ pub async fn execute(
 return Err("Maximum 20 items per checkpoint".to_string());
 }
-let mut storage = storage.lock().await;
 let mut results = Vec::new();
 let mut created = 0u32;
 let mut updated = 0u32;
@@ -181,10 +180,10 @@ mod tests {
 use super::*;
 use tempfile::TempDir;
-async fn test_storage() -> (Arc<Mutex<Storage>>, TempDir) {
+async fn test_storage() -> (Arc<Storage>, TempDir) {
 let dir = TempDir::new().unwrap();
 let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
-(Arc::new(Mutex::new(storage)), dir)
+(Arc::new(storage), dir)
 }
 #[test]
@@ -6,7 +6,7 @@
 use serde::Deserialize;
 use serde_json::Value;
 use std::sync::Arc;
-use tokio::sync::Mutex;
 use vestige_core::{IngestInput, Storage};
@@ -115,7 +115,7 @@ struct ContextArgs {
 }
 pub async fn execute_pattern(
-storage: &Arc<Mutex<Storage>>,
+storage: &Arc<Storage>,
 args: Option<Value>,
 ) -> Result<Value, String> {
 let args: PatternArgs = match args {
@@ -156,7 +156,6 @@ pub async fn execute_pattern(
 valid_until: None,
 };
-let mut storage = storage.lock().await;
 let node = storage.ingest(input).map_err(|e| e.to_string())?;
 Ok(serde_json::json!({
@@ -168,7 +167,7 @@ pub async fn execute_pattern(
 }
 pub async fn execute_decision(
-storage: &Arc<Mutex<Storage>>,
+storage: &Arc<Storage>,
 args: Option<Value>,
 ) -> Result<Value, String> {
 let args: DecisionArgs = match args {
@@ -223,7 +222,6 @@ pub async fn execute_decision(
 valid_until: None,
 };
-let mut storage = storage.lock().await;
 let node = storage.ingest(input).map_err(|e| e.to_string())?;
 Ok(serde_json::json!({
@@ -234,7 +232,7 @@ pub async fn execute_decision(
 }
 pub async fn execute_context(
-storage: &Arc<Mutex<Storage>>,
+storage: &Arc<Storage>,
 args: Option<Value>,
 ) -> Result<Value, String> {
 let args: ContextArgs = args
@@ -247,7 +245,6 @@ pub async fn execute_context(
 });
 let limit = args.limit.unwrap_or(10).clamp(1, 50);
-let storage = storage.lock().await;
 // Build tag filter for codebase
 // Tags are stored as: ["pattern", "codebase", "codebase:vestige"]
@@ -85,7 +85,7 @@ struct CodebaseArgs {
 /// Execute the unified codebase tool
 pub async fn execute(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     cognitive: &Arc<Mutex<CognitiveEngine>>,
     args: Option<Value>,
 ) -> Result<Value, String> {
@@ -107,7 +107,7 @@ pub async fn execute(
 /// Remember a code pattern
 async fn execute_remember_pattern(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     cognitive: &Arc<Mutex<CognitiveEngine>>,
     args: &CodebaseArgs,
 ) -> Result<Value, String> {
@@ -153,10 +153,8 @@ async fn execute_remember_pattern(
         valid_until: None,
     };
-    let mut storage = storage.lock().await;
     let node = storage.ingest(input).map_err(|e| e.to_string())?;
     let node_id = node.id.clone();
-    drop(storage);
     // ====================================================================
     // COGNITIVE: Cross-project pattern recording
@@ -186,7 +184,7 @@ async fn execute_remember_pattern(
 /// Remember an architectural decision
 async fn execute_remember_decision(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     cognitive: &Arc<Mutex<CognitiveEngine>>,
     args: &CodebaseArgs,
 ) -> Result<Value, String> {
@@ -250,10 +248,8 @@ async fn execute_remember_decision(
         valid_until: None,
     };
-    let mut storage = storage.lock().await;
     let node = storage.ingest(input).map_err(|e| e.to_string())?;
     let node_id = node.id.clone();
-    drop(storage);
     // ====================================================================
     // COGNITIVE: Cross-project decision recording
@@ -282,12 +278,11 @@ async fn execute_remember_decision(
 /// Get codebase context (patterns and decisions)
 async fn execute_get_context(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     cognitive: &Arc<Mutex<CognitiveEngine>>,
     args: &CodebaseArgs,
 ) -> Result<Value, String> {
     let limit = args.limit.unwrap_or(10).clamp(1, 50);
-    let storage = storage.lock().await;
     // Build tag filter for codebase
     let tag_filter = args
@@ -304,7 +299,6 @@ async fn execute_get_context(
     let decisions = storage
         .get_nodes_by_type_and_tag("decision", tag_filter.as_deref(), limit)
         .unwrap_or_default();
-    drop(storage);
     let formatted_patterns: Vec<Value> = patterns
         .iter()
@@ -403,10 +397,10 @@ mod tests {
         Arc::new(Mutex::new(CognitiveEngine::new()))
     }

-    async fn test_storage() -> (Arc<Mutex<Storage>>, tempfile::TempDir) {
+    async fn test_storage() -> (Arc<Storage>, tempfile::TempDir) {
         let dir = tempfile::TempDir::new().unwrap();
         let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
-        (Arc::new(Mutex::new(storage)), dir)
+        (Arc::new(storage), dir)
     }

     #[tokio::test]
@@ -4,7 +4,6 @@
 use serde_json::Value;
 use std::sync::Arc;
-use tokio::sync::Mutex;
 use vestige_core::Storage;
@@ -16,8 +15,7 @@ pub fn schema() -> Value {
     })
 }

-pub async fn execute(storage: &Arc<Mutex<Storage>>) -> Result<Value, String> {
-    let mut storage = storage.lock().await;
+pub async fn execute(storage: &Arc<Storage>) -> Result<Value, String> {
     let result = storage.run_consolidation().map_err(|e| e.to_string())?;
     Ok(serde_json::json!({
@@ -6,7 +6,7 @@
 use chrono::Utc;
 use serde_json::Value;
 use std::sync::Arc;
-use tokio::sync::Mutex;
 use vestige_core::{RecallInput, SearchMode, Storage};
@@ -51,7 +51,7 @@ pub fn schema() -> Value {
 }

 pub async fn execute(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     args: Option<Value>,
 ) -> Result<Value, String> {
     let args = args.ok_or("Missing arguments")?;
@@ -73,7 +73,6 @@ pub async fn execute(
     let limit = args["limit"].as_i64().unwrap_or(10) as i32;
-    let storage = storage.lock().await;
     let now = Utc::now();
     // Get candidate memories
@@ -8,7 +8,7 @@ use serde::Deserialize;
 use serde_json::Value;
 use std::collections::HashMap;
 use std::sync::Arc;
-use tokio::sync::Mutex;
 use vestige_core::Storage;

 #[cfg(all(feature = "embeddings", feature = "vector-search"))]
@@ -89,7 +89,7 @@ impl UnionFind {
 }

 pub async fn execute(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     args: Option<Value>,
 ) -> Result<Value, String> {
     let args: DedupArgs = match args {
@@ -107,7 +107,6 @@ pub async fn execute(
     #[cfg(all(feature = "embeddings", feature = "vector-search"))]
     {
-        let storage = storage.lock().await;
         // Load all embeddings
         let all_embeddings = storage
@@ -300,7 +299,7 @@ mod tests {
     async fn test_empty_storage() {
         let dir = tempfile::TempDir::new().unwrap();
         let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
-        let storage = Arc::new(Mutex::new(storage));
+        let storage = Arc::new(storage);
         let result = execute(&storage, None).await;
         assert!(result.is_ok());
     }
@@ -22,7 +22,7 @@ pub fn schema() -> serde_json::Value {
 }

 pub async fn execute(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     cognitive: &Arc<Mutex<CognitiveEngine>>,
     args: Option<serde_json::Value>,
 ) -> Result<serde_json::Value, String> {
@@ -32,10 +32,42 @@ pub async fn execute(
         .and_then(|v| v.as_u64())
         .unwrap_or(50) as usize;

-    let storage_guard = storage.lock().await;
-    let all_nodes = storage_guard.get_all_nodes(memory_count as i32, 0)
+    // v1.9.0: Waking SWR tagging — preferential replay of tagged memories (70/30 split)
+    let tagged_nodes = storage.get_waking_tagged_memories(memory_count as i32)
+        .unwrap_or_default();
+    let tagged_count = tagged_nodes.len();
+
+    // Calculate how many tagged vs random to include
+    let tagged_target = (memory_count * 7 / 10).min(tagged_count); // 70% tagged
+    let _random_target = memory_count.saturating_sub(tagged_target); // 30% random (used for logging)
+
+    // Build the dream memory set: tagged memories first, then fill with random
+    let tagged_ids: std::collections::HashSet<String> = tagged_nodes.iter()
+        .take(tagged_target)
+        .map(|n| n.id.clone())
+        .collect();
+
+    let random_nodes = storage.get_all_nodes(memory_count as i32, 0)
         .map_err(|e| format!("Failed to load memories: {}", e))?;
+
+    let mut all_nodes: Vec<_> = tagged_nodes.into_iter().take(tagged_target).collect();
+    for node in random_nodes {
+        if !tagged_ids.contains(&node.id) && all_nodes.len() < memory_count {
+            all_nodes.push(node);
+        }
+    }
+
+    // If still under capacity (e.g., all memories are tagged), fill from remaining tagged
+    if all_nodes.len() < memory_count {
+        let used_ids: std::collections::HashSet<String> = all_nodes.iter().map(|n| n.id.clone()).collect();
+        let remaining_tagged = storage.get_waking_tagged_memories(memory_count as i32)
+            .unwrap_or_default();
+        for node in remaining_tagged {
+            if !used_ids.contains(&node.id) && all_nodes.len() < memory_count {
+                all_nodes.push(node);
+            }
+        }
+    }
+
     if all_nodes.len() < 5 {
         return Ok(serde_json::json!({
             "status": "insufficient_memories",
@@ -48,23 +80,57 @@ pub async fn execute(
         vestige_core::DreamMemory {
             id: n.id.clone(),
             content: n.content.clone(),
-            embedding: storage_guard.get_node_embedding(&n.id).ok().flatten(),
+            embedding: storage.get_node_embedding(&n.id).ok().flatten(),
             tags: n.tags.clone(),
             created_at: n.created_at,
             access_count: n.reps as u32,
         }
     }).collect();

-    // Drop storage lock before taking cognitive lock (strict ordering)
-    drop(storage_guard);
     let cog = cognitive.lock().await;
+    let pre_dream_count = cog.dreamer.get_connections().len();
     let dream_result = cog.dreamer.dream(&dream_memories).await;
     let insights = cog.dreamer.synthesize_insights(&dream_memories);
+    let all_connections = cog.dreamer.get_connections();
     drop(cog);

+    // v1.9.0: Persist only NEW connections from this dream (skip accumulated ones)
+    let new_connections = &all_connections[pre_dream_count..];
+    let mut connections_persisted = 0u64;
+    {
+        let now = Utc::now();
+        for conn in new_connections {
+            let link_type = match conn.connection_type {
+                vestige_core::DiscoveredConnectionType::Semantic => "semantic",
+                vestige_core::DiscoveredConnectionType::SharedConcept => "shared_concepts",
+                vestige_core::DiscoveredConnectionType::Temporal => "temporal",
+                vestige_core::DiscoveredConnectionType::Complementary => "complementary",
+                vestige_core::DiscoveredConnectionType::CausalChain => "causal",
+            };
+            let record = vestige_core::ConnectionRecord {
+                source_id: conn.from_id.clone(),
+                target_id: conn.to_id.clone(),
+                strength: conn.similarity,
+                link_type: link_type.to_string(),
+                created_at: now,
+                last_activated: now,
+                activation_count: 1,
+            };
+            if storage.save_connection(&record).is_ok() {
+                connections_persisted += 1;
+            }
+        }
+        if connections_persisted > 0 {
+            tracing::info!(
+                connections_persisted = connections_persisted,
+                "Dream: persisted {} connections to database",
+                connections_persisted
+            );
+        }
+    }
+
     // Persist dream history (non-fatal on failure — dream still happened)
     {
-        let mut storage_guard = storage.lock().await;
         let record = DreamHistoryRecord {
             dreamed_at: Utc::now(),
             duration_ms: dream_result.duration_ms as i64,
@@ -74,14 +140,19 @@ pub async fn execute(
             memories_strengthened: dream_result.memories_strengthened as i32,
             memories_compressed: dream_result.memories_compressed as i32,
         };
-        if let Err(e) = storage_guard.save_dream_history(&record) {
+        if let Err(e) = storage.save_dream_history(&record) {
             tracing::warn!("Failed to persist dream history: {}", e);
         }
     }

+    // v1.9.0: Clear waking tags after dream processes them
+    let tags_cleared = storage.clear_waking_tags().unwrap_or(0);
+
     Ok(serde_json::json!({
         "status": "dreamed",
         "memoriesReplayed": dream_memories.len(),
+        "wakingTagsProcessed": tagged_target,
+        "wakingTagsCleared": tags_cleared,
         "insights": insights.iter().map(|i| serde_json::json!({
             "insight_type": format!("{:?}", i.insight_type),
             "insight": i.insight,
@@ -89,8 +160,10 @@ pub async fn execute(
             "confidence": i.confidence,
             "novelty_score": i.novelty_score,
         })).collect::<Vec<_>>(),
+        "connectionsPersisted": connections_persisted,
         "stats": {
             "new_connections_found": dream_result.new_connections_found,
+            "connections_persisted": connections_persisted,
             "memories_strengthened": dream_result.memories_strengthened,
             "memories_compressed": dream_result.memories_compressed,
             "insights_generated": dream_result.insights_generated.len(),
@@ -109,16 +182,15 @@ mod tests {
         Arc::new(Mutex::new(CognitiveEngine::new()))
     }

-    async fn test_storage() -> (Arc<Mutex<Storage>>, TempDir) {
+    async fn test_storage() -> (Arc<Storage>, TempDir) {
         let dir = TempDir::new().unwrap();
         let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
-        (Arc::new(Mutex::new(storage)), dir)
+        (Arc::new(storage), dir)
     }

-    async fn ingest_n_memories(storage: &Arc<Mutex<Storage>>, n: usize) {
-        let mut s = storage.lock().await;
+    async fn ingest_n_memories(storage: &Arc<Storage>, n: usize) {
         for i in 0..n {
-            s.ingest(vestige_core::IngestInput {
+            storage.ingest(vestige_core::IngestInput {
                 content: format!("Dream test memory number {}", i),
                 node_type: "fact".to_string(),
                 source: None,
@@ -216,8 +288,7 @@ mod tests {
     // Before dream: no dream history
     {
-        let s = storage.lock().await;
-        assert!(s.get_last_dream().unwrap().is_none());
+        assert!(storage.get_last_dream().unwrap().is_none());
     }

     let result = execute(&storage, &test_cognitive(), None).await;
@@ -227,8 +298,7 @@ mod tests {
     // After dream: dream history should exist
     {
-        let s = storage.lock().await;
-        let last = s.get_last_dream().unwrap();
+        let last = storage.get_last_dream().unwrap();
         assert!(last.is_some(), "Dream should have been persisted to database");
     }
 }
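For illustration, the tagged-first 70/30 fill used by the dream tool can be sketched in isolation. This is a minimal sketch over plain string IDs, not the real code (which works over storage nodes); `fill_dream_set` and its parameters are hypothetical names.

```rust
use std::collections::HashSet;

/// Pick up to `capacity` items: ~70% from `tagged` (if that many exist),
/// the rest from `pool`, skipping duplicates; if the pool runs dry,
/// fall back to the remaining tagged items to fill capacity.
fn fill_dream_set(tagged: &[String], pool: &[String], capacity: usize) -> Vec<String> {
    let tagged_target = (capacity * 7 / 10).min(tagged.len()); // 70% cap
    let mut out: Vec<String> = tagged.iter().take(tagged_target).cloned().collect();
    let mut seen: HashSet<String> = out.iter().cloned().collect();
    // Fill the remaining ~30% from the general pool.
    for id in pool {
        if out.len() >= capacity { break; }
        if seen.insert(id.clone()) { out.push(id.clone()); }
    }
    // Capacity fill: if still short (e.g. everything is tagged), use leftover tagged.
    for id in tagged {
        if out.len() >= capacity { break; }
        if seen.insert(id.clone()) { out.push(id.clone()); }
    }
    out
}

fn main() {
    let tagged: Vec<String> = (0..10).map(|i| format!("t{}", i)).collect();
    let pool: Vec<String> = (0..10).map(|i| format!("p{}", i)).collect();
    let set = fill_dream_set(&tagged, &pool, 10);
    assert_eq!(set.iter().filter(|id| id.starts_with('t')).count(), 7); // 70% tagged
    assert_eq!(set.len(), 10);
    // With no random pool, the fallback still reaches capacity (v1.9.1 fix).
    let set2 = fill_dream_set(&tagged, &[], 10);
    assert_eq!(set2.len(), 10);
    println!("{} {}", set.len(), set2.len());
}
```

The second assertion is the v1.9.1 "70/30 split capacity fill" behavior: when every memory is tagged, the set is topped up from the tagged list instead of coming back short.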
@@ -35,7 +35,7 @@ pub fn schema() -> serde_json::Value {
 }

 pub async fn execute(
-    _storage: &Arc<Mutex<Storage>>,
+    _storage: &Arc<Storage>,
     cognitive: &Arc<Mutex<CognitiveEngine>>,
     args: Option<serde_json::Value>,
 ) -> Result<serde_json::Value, String> {
@@ -137,10 +137,10 @@ mod tests {
         Arc::new(Mutex::new(CognitiveEngine::new()))
     }

-    async fn test_storage() -> (Arc<Mutex<Storage>>, TempDir) {
+    async fn test_storage() -> (Arc<Storage>, TempDir) {
         let dir = TempDir::new().unwrap();
         let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
-        (Arc::new(Mutex::new(storage)), dir)
+        (Arc::new(storage), dir)
     }

     #[test]
@@ -61,7 +61,7 @@ struct FeedbackArgs {

 /// Promote a memory (thumbs up) - it led to a good outcome
 pub async fn execute_promote(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     cognitive: &Arc<Mutex<CognitiveEngine>>,
     args: Option<Value>,
 ) -> Result<Value, String> {
@@ -73,14 +73,12 @@ pub async fn execute_promote(
     // Validate UUID
     uuid::Uuid::parse_str(&args.id).map_err(|_| "Invalid node ID format".to_string())?;

-    let storage_guard = storage.lock().await;
     // Get node before for comparison
-    let before = storage_guard.get_node(&args.id).map_err(|e| e.to_string())?
+    let before = storage.get_node(&args.id).map_err(|e| e.to_string())?
         .ok_or_else(|| format!("Node not found: {}", args.id))?;

-    let node = storage_guard.promote_memory(&args.id).map_err(|e| e.to_string())?;
-    drop(storage_guard);
+    let node = storage.promote_memory(&args.id).map_err(|e| e.to_string())?;

     // ====================================================================
     // COGNITIVE FEEDBACK PIPELINE (promote)
@@ -133,7 +131,7 @@ pub async fn execute_promote(
 /// Demote a memory (thumbs down) - it led to a bad outcome
 pub async fn execute_demote(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     cognitive: &Arc<Mutex<CognitiveEngine>>,
     args: Option<Value>,
 ) -> Result<Value, String> {
@@ -145,14 +143,12 @@ pub async fn execute_demote(
     // Validate UUID
     uuid::Uuid::parse_str(&args.id).map_err(|_| "Invalid node ID format".to_string())?;

-    let storage_guard = storage.lock().await;
     // Get node before for comparison
-    let before = storage_guard.get_node(&args.id).map_err(|e| e.to_string())?
+    let before = storage.get_node(&args.id).map_err(|e| e.to_string())?
         .ok_or_else(|| format!("Node not found: {}", args.id))?;

-    let node = storage_guard.demote_memory(&args.id).map_err(|e| e.to_string())?;
-    drop(storage_guard);
+    let node = storage.demote_memory(&args.id).map_err(|e| e.to_string())?;

     // ====================================================================
     // COGNITIVE FEEDBACK PIPELINE (demote)
@@ -230,7 +226,7 @@ struct RequestFeedbackArgs {
 /// Request feedback from the user about a memory's usefulness
 /// Returns a structured prompt for Claude to ask the user
 pub async fn execute_request_feedback(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     args: Option<Value>,
 ) -> Result<Value, String> {
     let args: RequestFeedbackArgs = match args {
@@ -241,7 +237,6 @@ pub async fn execute_request_feedback(
     // Validate UUID
     uuid::Uuid::parse_str(&args.id).map_err(|_| "Invalid node ID format".to_string())?;

-    let storage = storage.lock().await;
     let node = storage.get_node(&args.id).map_err(|e| e.to_string())?
         .ok_or_else(|| format!("Node not found: {}", args.id))?;
@@ -294,15 +289,14 @@ mod tests {
         Arc::new(Mutex::new(CognitiveEngine::new()))
     }

-    async fn test_storage() -> (Arc<Mutex<Storage>>, TempDir) {
+    async fn test_storage() -> (Arc<Storage>, TempDir) {
         let dir = TempDir::new().unwrap();
         let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
-        (Arc::new(Mutex::new(storage)), dir)
+        (Arc::new(storage), dir)
     }

-    async fn ingest_test_memory(storage: &Arc<Mutex<Storage>>) -> String {
-        let mut s = storage.lock().await;
-        let node = s
+    async fn ingest_test_memory(storage: &Arc<Storage>) -> String {
+        let node = storage
             .ingest(vestige_core::IngestInput {
                 content: "Test memory for feedback".to_string(),
                 node_type: "fact".to_string(),
@@ -542,8 +536,7 @@ mod tests {
     async fn test_request_feedback_truncates_long_content() {
         let (storage, _dir) = test_storage().await;
         let long_content = "A".repeat(200);
-        let mut s = storage.lock().await;
-        let node = s
+        let node = storage
             .ingest(vestige_core::IngestInput {
                 content: long_content,
                 node_type: "fact".to_string(),
@@ -555,9 +548,9 @@ mod tests {
                 valid_until: None,
             })
             .unwrap();
-        drop(s);
+        let node_id = node.id.clone();

-        let args = serde_json::json!({ "id": node.id });
+        let args = serde_json::json!({ "id": node_id });
         let result = execute_request_feedback(&storage, Some(args)).await;
         let value = result.unwrap();
         let preview = value["memoryPreview"].as_str().unwrap();
@@ -0,0 +1,359 @@
//! memory_graph tool — Subgraph export with force-directed layout for visualization.
//! v1.9.0: Computes Fruchterman-Reingold layout server-side.
use std::sync::Arc;
use vestige_core::Storage;
pub fn schema() -> serde_json::Value {
serde_json::json!({
"type": "object",
"properties": {
"center_id": {
"type": "string",
"description": "Memory ID to center the graph on. Required if no query."
},
"query": {
"type": "string",
"description": "Search query to find center node. Used if center_id not provided."
},
"depth": {
"type": "integer",
"description": "How many hops from center to include (1-3, default: 2)",
"default": 2,
"minimum": 1,
"maximum": 3
},
"max_nodes": {
"type": "integer",
"description": "Maximum number of nodes to include (default: 50)",
"default": 50,
"maximum": 200
}
}
})
}
/// Simple Fruchterman-Reingold force-directed layout
fn fruchterman_reingold(
node_count: usize,
edges: &[(usize, usize, f64)],
width: f64,
height: f64,
iterations: usize,
) -> Vec<(f64, f64)> {
if node_count == 0 {
return Vec::new();
}
if node_count == 1 {
return vec![(width / 2.0, height / 2.0)];
}
let area = width * height;
let k = (area / node_count as f64).sqrt();
// Initialize positions in a circle
let mut positions: Vec<(f64, f64)> = (0..node_count)
.map(|i| {
let angle = 2.0 * std::f64::consts::PI * i as f64 / node_count as f64;
(
width / 2.0 + (width / 3.0) * angle.cos(),
height / 2.0 + (height / 3.0) * angle.sin(),
)
})
.collect();
let mut temperature = width / 10.0;
let cooling = temperature / iterations as f64;
for _ in 0..iterations {
let mut displacements = vec![(0.0f64, 0.0f64); node_count];
// Repulsive forces between all pairs
for i in 0..node_count {
for j in (i + 1)..node_count {
let dx = positions[i].0 - positions[j].0;
let dy = positions[i].1 - positions[j].1;
let dist = (dx * dx + dy * dy).sqrt().max(0.01);
let force = k * k / dist;
let fx = dx / dist * force;
let fy = dy / dist * force;
displacements[i].0 += fx;
displacements[i].1 += fy;
displacements[j].0 -= fx;
displacements[j].1 -= fy;
}
}
// Attractive forces along edges
for &(u, v, weight) in edges {
let dx = positions[u].0 - positions[v].0;
let dy = positions[u].1 - positions[v].1;
let dist = (dx * dx + dy * dy).sqrt().max(0.01);
let force = dist * dist / k * weight;
let fx = dx / dist * force;
let fy = dy / dist * force;
displacements[u].0 -= fx;
displacements[u].1 -= fy;
displacements[v].0 += fx;
displacements[v].1 += fy;
}
// Apply displacements with temperature limiting
for i in 0..node_count {
let dx = displacements[i].0;
let dy = displacements[i].1;
let dist = (dx * dx + dy * dy).sqrt().max(0.01);
let capped = dist.min(temperature);
positions[i].0 += dx / dist * capped;
positions[i].1 += dy / dist * capped;
// Clamp to bounds
positions[i].0 = positions[i].0.clamp(10.0, width - 10.0);
positions[i].1 = positions[i].1.clamp(10.0, height - 10.0);
}
temperature -= cooling;
if temperature < 0.1 {
break;
}
}
positions
}
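The displacement step in the loop above limits each node's per-iteration movement to the current temperature while keeping its direction, which is what lets the layout settle as the temperature cools. That clamping can be exercised on its own; `cap_displacement` is a hypothetical helper name, not part of the file.

```rust
/// Scale a displacement vector (dx, dy) down to at most `temperature`
/// in magnitude, preserving direction; mirrors the clamped update in
/// the Fruchterman-Reingold iteration.
fn cap_displacement(dx: f64, dy: f64, temperature: f64) -> (f64, f64) {
    let dist = (dx * dx + dy * dy).sqrt().max(0.01); // avoid division by zero
    let capped = dist.min(temperature);
    (dx / dist * capped, dy / dist * capped)
}

fn main() {
    // A 3-4-5 displacement of magnitude 50, capped to 5, keeps its direction.
    let (x, y) = cap_displacement(30.0, 40.0, 5.0);
    assert!((x - 3.0).abs() < 1e-9);
    assert!((y - 4.0).abs() < 1e-9);
    // Moves already below the temperature pass through unchanged.
    let (x2, y2) = cap_displacement(1.0, 0.0, 5.0);
    assert!((x2 - 1.0).abs() < 1e-9);
    println!("{x} {y} {x2} {y2}");
}
```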
pub async fn execute(
storage: &Arc<Storage>,
args: Option<serde_json::Value>,
) -> Result<serde_json::Value, String> {
let depth = args.as_ref()
.and_then(|a| a.get("depth"))
.and_then(|v| v.as_u64())
.unwrap_or(2)
.min(3) as u32;
let max_nodes = args.as_ref()
.and_then(|a| a.get("max_nodes"))
.and_then(|v| v.as_u64())
.unwrap_or(50)
.min(200) as usize;
// Determine center node
let center_id = if let Some(id) = args.as_ref().and_then(|a| a.get("center_id")).and_then(|v| v.as_str()) {
id.to_string()
} else if let Some(query) = args.as_ref().and_then(|a| a.get("query")).and_then(|v| v.as_str()) {
// Search for center node
let results = storage.search(query, 1)
.map_err(|e| format!("Search failed: {}", e))?;
results.first()
.map(|n| n.id.clone())
.ok_or_else(|| "No memories found matching query".to_string())?
} else {
// Default: use the most recent memory
let recent = storage.get_all_nodes(1, 0)
.map_err(|e| format!("Failed to get recent node: {}", e))?;
recent.first()
.map(|n| n.id.clone())
.ok_or_else(|| "No memories in database".to_string())?
};
// Get subgraph
let (nodes, edges) = storage.get_memory_subgraph(&center_id, depth, max_nodes)
.map_err(|e| format!("Failed to get subgraph: {}", e))?;
if nodes.is_empty() || !nodes.iter().any(|n| n.id == center_id) {
return Err(format!("Memory '{}' not found or has no accessible data", center_id));
}
// Build index map for FR layout
let id_to_idx: std::collections::HashMap<&str, usize> = nodes.iter()
.enumerate()
.map(|(i, n)| (n.id.as_str(), i))
.collect();
let layout_edges: Vec<(usize, usize, f64)> = edges.iter()
.filter_map(|e| {
let u = id_to_idx.get(e.source_id.as_str())?;
let v = id_to_idx.get(e.target_id.as_str())?;
Some((*u, *v, e.strength))
})
.collect();
// Compute force-directed layout
let positions = fruchterman_reingold(nodes.len(), &layout_edges, 800.0, 600.0, 50);
// Build response
let nodes_json: Vec<serde_json::Value> = nodes.iter()
.enumerate()
.map(|(i, n)| {
let (x, y) = positions.get(i).copied().unwrap_or((400.0, 300.0));
serde_json::json!({
"id": n.id,
"label": if n.content.chars().count() > 60 {
format!("{}...", n.content.chars().take(57).collect::<String>())
} else {
n.content.clone()
},
"type": n.node_type,
"retention": n.retention_strength,
"tags": n.tags,
"x": (x * 100.0).round() / 100.0,
"y": (y * 100.0).round() / 100.0,
"isCenter": n.id == center_id,
})
})
.collect();
let edges_json: Vec<serde_json::Value> = edges.iter()
.map(|e| {
serde_json::json!({
"source": e.source_id,
"target": e.target_id,
"weight": e.strength,
"type": e.link_type,
})
})
.collect();
Ok(serde_json::json!({
"nodes": nodes_json,
"edges": edges_json,
"center_id": center_id,
"depth": depth,
"nodeCount": nodes.len(),
"edgeCount": edges.len(),
}))
}
#[cfg(test)]
mod tests {
use super::*;
use tempfile::TempDir;
async fn test_storage() -> (Arc<Storage>, TempDir) {
let dir = TempDir::new().unwrap();
let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
(Arc::new(storage), dir)
}
#[test]
fn test_schema_is_valid() {
let s = schema();
assert_eq!(s["type"], "object");
assert!(s["properties"]["center_id"].is_object());
assert!(s["properties"]["query"].is_object());
assert!(s["properties"]["depth"].is_object());
assert!(s["properties"]["max_nodes"].is_object());
}
#[test]
fn test_fruchterman_reingold_empty() {
let positions = fruchterman_reingold(0, &[], 800.0, 600.0, 50);
assert!(positions.is_empty());
}
#[test]
fn test_fruchterman_reingold_single_node() {
let positions = fruchterman_reingold(1, &[], 800.0, 600.0, 50);
assert_eq!(positions.len(), 1);
assert!((positions[0].0 - 400.0).abs() < 0.01);
assert!((positions[0].1 - 300.0).abs() < 0.01);
}
#[test]
fn test_fruchterman_reingold_two_nodes() {
let edges = vec![(0, 1, 1.0)];
let positions = fruchterman_reingold(2, &edges, 800.0, 600.0, 50);
assert_eq!(positions.len(), 2);
// Nodes should be within bounds
for (x, y) in &positions {
assert!(*x >= 10.0 && *x <= 790.0);
assert!(*y >= 10.0 && *y <= 590.0);
}
}
#[test]
fn test_fruchterman_reingold_connected_graph() {
let edges = vec![(0, 1, 1.0), (1, 2, 1.0), (2, 0, 1.0)];
let positions = fruchterman_reingold(3, &edges, 800.0, 600.0, 50);
assert_eq!(positions.len(), 3);
// Connected nodes should be closer than disconnected nodes in a larger graph
for (x, y) in &positions {
assert!(*x >= 10.0 && *x <= 790.0);
assert!(*y >= 10.0 && *y <= 590.0);
}
}
#[tokio::test]
async fn test_graph_empty_database() {
let (storage, _dir) = test_storage().await;
let result = execute(&storage, None).await;
assert!(result.is_err()); // No memories to center on
}
#[tokio::test]
async fn test_graph_with_center_id() {
let (storage, _dir) = test_storage().await;
let node = storage.ingest(vestige_core::IngestInput {
content: "Graph test memory".to_string(),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: vec!["test".to_string()],
valid_from: None,
valid_until: None,
}).unwrap();
let args = serde_json::json!({ "center_id": node.id });
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
assert_eq!(value["center_id"], node.id);
assert_eq!(value["nodeCount"], 1);
let nodes = value["nodes"].as_array().unwrap();
assert_eq!(nodes.len(), 1);
assert_eq!(nodes[0]["isCenter"], true);
}
#[tokio::test]
async fn test_graph_with_query() {
let (storage, _dir) = test_storage().await;
storage.ingest(vestige_core::IngestInput {
content: "Quantum computing fundamentals".to_string(),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: vec!["science".to_string()],
valid_from: None,
valid_until: None,
}).unwrap();
let args = serde_json::json!({ "query": "quantum" });
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
assert!(value["nodeCount"].as_u64().unwrap() >= 1);
}
#[tokio::test]
async fn test_graph_node_has_position() {
let (storage, _dir) = test_storage().await;
let node = storage.ingest(vestige_core::IngestInput {
content: "Position test memory".to_string(),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: vec![],
valid_from: None,
valid_until: None,
}).unwrap();
let args = serde_json::json!({ "center_id": node.id });
let result = execute(&storage, Some(args)).await.unwrap();
let nodes = result["nodes"].as_array().unwrap();
assert!(nodes[0]["x"].is_number());
assert!(nodes[0]["y"].is_number());
}
}
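The `label` field above truncates by characters rather than bytes, which is what the v1.9.1 notes call UTF-8 safe truncation: a byte slice like `&s[..57]` panics if index 57 falls inside a multi-byte sequence, while `chars().take(57)` cannot. A minimal standalone sketch of the same idea (the function name is hypothetical):

```rust
/// Truncate to at most `max` characters, appending "..." when cut.
/// Char-based iteration never splits a multi-byte UTF-8 sequence.
fn truncate_label(s: &str, max: usize) -> String {
    if s.chars().count() > max {
        format!("{}...", s.chars().take(max - 3).collect::<String>())
    } else {
        s.to_string()
    }
}

fn main() {
    assert_eq!(truncate_label("short", 60), "short");
    let long = "é".repeat(100); // 2 bytes per char; byte slicing could panic
    let cut = truncate_label(&long, 60);
    assert_eq!(cut.chars().count(), 60); // 57 chars + "..."
    assert!(cut.ends_with("..."));
    println!("{}", cut.chars().count());
}
```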
@@ -0,0 +1,150 @@
//! memory_health tool — Retention dashboard for memory quality monitoring.
//! v1.9.0: Lightweight alternative to full system_status focused on memory health.
use std::sync::Arc;
use vestige_core::Storage;
pub fn schema() -> serde_json::Value {
serde_json::json!({
"type": "object",
"properties": {}
})
}
pub async fn execute(
storage: &Arc<Storage>,
_args: Option<serde_json::Value>,
) -> Result<serde_json::Value, String> {
// Average retention
let avg_retention = storage.get_avg_retention()
.map_err(|e| format!("Failed to get avg retention: {}", e))?;
// Retention distribution
let distribution = storage.get_retention_distribution()
.map_err(|e| format!("Failed to get retention distribution: {}", e))?;
let distribution_json: serde_json::Value = distribution.iter().map(|(bucket, count)| {
serde_json::json!({ "bucket": bucket, "count": count })
}).collect();
// Retention trend
let trend = storage.get_retention_trend()
.unwrap_or_else(|_| "unknown".to_string());
// Total memories and those below key thresholds
let stats = storage.get_stats()
.map_err(|e| format!("Failed to get stats: {}", e))?;
let below_30 = storage.count_memories_below_retention(0.3).unwrap_or(0);
let below_50 = storage.count_memories_below_retention(0.5).unwrap_or(0);
// Retention target
let retention_target: f64 = std::env::var("VESTIGE_RETENTION_TARGET")
.ok()
.and_then(|v| v.parse().ok())
.unwrap_or(0.8);
let meets_target = avg_retention >= retention_target;
// Generate recommendation
let recommendation = if avg_retention >= 0.8 {
"Excellent memory health. Retention is strong across the board."
} else if avg_retention >= 0.6 {
"Good memory health. Consider reviewing memories in the 0-40% range."
} else if avg_retention >= 0.4 {
"Fair memory health. Many memories are decaying. Run consolidation and consider GC."
} else {
"Poor memory health. Urgent: run consolidation, then GC stale memories below 0.3."
};
Ok(serde_json::json!({
"avgRetention": format!("{:.1}%", avg_retention * 100.0),
"avgRetentionRaw": avg_retention,
"retentionTarget": retention_target,
"meetsTarget": meets_target,
"totalMemories": stats.total_nodes,
"distribution": distribution_json,
"trend": trend,
"memoriesBelow30pct": below_30,
"memoriesBelow50pct": below_50,
"recommendation": recommendation,
}))
}
#[cfg(test)]
mod tests {
use super::*;
use tempfile::TempDir;
async fn test_storage() -> (Arc<Storage>, TempDir) {
let dir = TempDir::new().unwrap();
let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
(Arc::new(storage), dir)
}
#[test]
fn test_schema_is_valid() {
let s = schema();
assert_eq!(s["type"], "object");
}
#[tokio::test]
async fn test_health_empty_database() {
let (storage, _dir) = test_storage().await;
let result = execute(&storage, None).await;
assert!(result.is_ok());
let value = result.unwrap();
assert_eq!(value["totalMemories"], 0);
assert!(value["avgRetention"].is_string());
assert!(value["recommendation"].is_string());
}
#[tokio::test]
async fn test_health_with_memories() {
let (storage, _dir) = test_storage().await;
// Ingest some test memories
for i in 0..5 {
storage.ingest(vestige_core::IngestInput {
content: format!("Health test memory {}", i),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: vec!["test".to_string()],
valid_from: None,
valid_until: None,
}).unwrap();
}
let result = execute(&storage, None).await;
assert!(result.is_ok());
let value = result.unwrap();
assert_eq!(value["totalMemories"], 5);
assert!(value["distribution"].is_array());
assert!(value["meetsTarget"].is_boolean());
}
#[tokio::test]
async fn test_health_distribution_buckets() {
let (storage, _dir) = test_storage().await;
storage.ingest(vestige_core::IngestInput {
content: "Test memory for distribution".to_string(),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: vec![],
valid_from: None,
valid_until: None,
}).unwrap();
let result = execute(&storage, None).await.unwrap();
let dist = result["distribution"].as_array().unwrap();
// Should have at least one bucket with data
assert!(!dist.is_empty());
let total: i64 = dist.iter()
.map(|b| b["count"].as_i64().unwrap_or(0))
.sum();
assert_eq!(total, 1);
}
}


@@ -47,7 +47,7 @@ struct ImportanceArgs {
 }
 
 pub async fn execute(
-    _storage: &Arc<Mutex<Storage>>,
+    _storage: &Arc<Storage>,
     cognitive: &Arc<Mutex<CognitiveEngine>>,
     args: Option<Value>,
 ) -> Result<Value, String> {
@@ -137,18 +137,18 @@ mod tests {
     #[tokio::test]
    async fn test_empty_content_fails() {
-        let storage = Arc::new(Mutex::new(
+        let storage = Arc::new(
             Storage::new(Some(std::path::PathBuf::from("/tmp/test_importance.db"))).unwrap(),
-        ));
+        );
         let result = execute(&storage, &test_cognitive(), Some(serde_json::json!({ "content": "" }))).await;
         assert!(result.is_err());
     }
 
     #[tokio::test]
     async fn test_basic_importance_score() {
-        let storage = Arc::new(Mutex::new(
+        let storage = Arc::new(
             Storage::new(Some(std::path::PathBuf::from("/tmp/test_importance2.db"))).unwrap(),
-        ));
+        );
         let result = execute(
             &storage,
             &test_cognitive(),


@@ -55,7 +55,7 @@ struct IngestArgs {
 }
 
 pub async fn execute(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     cognitive: &Arc<Mutex<CognitiveEngine>>,
     args: Option<Value>,
 ) -> Result<Value, String> {
@@ -123,20 +123,18 @@ pub async fn execute(
     // ====================================================================
     // INGEST (storage lock)
     // ====================================================================
-    let mut storage_guard = storage.lock().await;
     // Route through smart_ingest when embeddings are available to prevent duplicates.
     // Falls back to raw ingest only when embeddings aren't ready.
     #[cfg(all(feature = "embeddings", feature = "vector-search"))]
     {
         let fallback_input = input.clone();
-        match storage_guard.smart_ingest(input) {
+        match storage.smart_ingest(input) {
             Ok(result) => {
                 let node_id = result.node.id.clone();
                 let node_content = result.node.content.clone();
                 let node_type = result.node.node_type.clone();
                 let has_embedding = result.node.has_embedding.unwrap_or(false);
-                drop(storage_guard);
 
                 run_post_ingest(cognitive, &node_id, &node_content, &node_type, importance_composite);
@@ -153,12 +151,11 @@ pub async fn execute(
                 }))
             }
             Err(_) => {
-                let node = storage_guard.ingest(fallback_input).map_err(|e| e.to_string())?;
+                let node = storage.ingest(fallback_input).map_err(|e| e.to_string())?;
                 let node_id = node.id.clone();
                 let node_content = node.content.clone();
                 let node_type = node.node_type.clone();
                 let has_embedding = node.has_embedding.unwrap_or(false);
-                drop(storage_guard);
 
                 run_post_ingest(cognitive, &node_id, &node_content, &node_type, importance_composite);
@@ -178,12 +175,11 @@ pub async fn execute(
     // Fallback for builds without embedding features
     #[cfg(not(all(feature = "embeddings", feature = "vector-search")))]
     {
-        let node = storage_guard.ingest(input).map_err(|e| e.to_string())?;
+        let node = storage.ingest(input).map_err(|e| e.to_string())?;
         let node_id = node.id.clone();
         let node_content = node.content.clone();
         let node_type = node.node_type.clone();
         let has_embedding = node.has_embedding.unwrap_or(false);
-        drop(storage_guard);
 
         run_post_ingest(cognitive, &node_id, &node_content, &node_type, importance_composite);
@@ -249,10 +245,10 @@ mod tests {
     }
 
     /// Create a test storage instance with a temporary database
-    async fn test_storage() -> (Arc<Mutex<Storage>>, TempDir) {
+    async fn test_storage() -> (Arc<Storage>, TempDir) {
         let dir = TempDir::new().unwrap();
         let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
-        (Arc::new(Mutex::new(storage)), dir)
+        (Arc::new(storage), dir)
     }
 
     // ========================================================================
@@ -412,8 +408,7 @@ mod tests {
         // Verify node was created - the default type is "fact"
         let node_id = result.unwrap()["nodeId"].as_str().unwrap().to_string();
-        let storage_lock = storage.lock().await;
-        let node = storage_lock.get_node(&node_id).unwrap().unwrap();
+        let node = storage.get_node(&node_id).unwrap().unwrap();
         assert_eq!(node.node_type, "fact");
     }


@@ -199,7 +199,7 @@ struct UnifiedIntentionArgs {
 /// Execute the unified intention tool
 pub async fn execute(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     cognitive: &Arc<Mutex<CognitiveEngine>>,
     args: Option<Value>,
 ) -> Result<Value, String> {
@@ -226,7 +226,7 @@ pub async fn execute(
 /// Execute "set" action - create a new intention
 async fn execute_set(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     cognitive: &Arc<Mutex<CognitiveEngine>>,
     args: &UnifiedIntentionArgs,
 ) -> Result<Value, String> {
@@ -382,7 +382,6 @@ async fn execute_set(
         source_data: None,
     };
 
-    let mut storage = storage.lock().await;
     storage.save_intention(&record).map_err(|e| e.to_string())?;
 
     Ok(serde_json::json!({
@@ -399,7 +398,7 @@ async fn execute_set(
 /// Execute "check" action - find triggered intentions
 async fn execute_check(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     cognitive: &Arc<Mutex<CognitiveEngine>>,
     args: &UnifiedIntentionArgs,
 ) -> Result<Value, String> {
@@ -425,7 +424,6 @@ async fn execute_check(
         let _ = cog.prospective_memory.update_context(prospective_ctx);
     }
 
-    let storage = storage.lock().await;
     // Get active intentions
     let intentions = storage.get_active_intentions().map_err(|e| e.to_string())?;
@@ -518,7 +516,7 @@ async fn execute_check(
 /// Execute "update" action - complete, snooze, or cancel an intention
 async fn execute_update(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     args: &UnifiedIntentionArgs,
 ) -> Result<Value, String> {
     let intention_id = args
@@ -533,7 +531,6 @@ async fn execute_update(
     match status.as_str() {
         "complete" => {
-            let mut storage = storage.lock().await;
             let updated = storage
                 .update_intention_status(intention_id, "fulfilled")
                 .map_err(|e| e.to_string())?;
@@ -554,7 +551,6 @@ async fn execute_update(
             let minutes = args.snooze_minutes.unwrap_or(30);
             let snooze_until = Utc::now() + Duration::minutes(minutes);
 
-            let mut storage = storage.lock().await;
             let updated = storage
                 .snooze_intention(intention_id, snooze_until)
                 .map_err(|e| e.to_string())?;
@@ -573,7 +569,6 @@ async fn execute_update(
             }
         }
         "cancel" => {
-            let mut storage = storage.lock().await;
             let updated = storage
                 .update_intention_status(intention_id, "cancelled")
                 .map_err(|e| e.to_string())?;
@@ -599,11 +594,10 @@ async fn execute_update(
 /// Execute "list" action - list intentions with optional filtering
 async fn execute_list(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     args: &UnifiedIntentionArgs,
 ) -> Result<Value, String> {
     let filter_status = args.filter_status.as_deref().unwrap_or("active");
-    let storage = storage.lock().await;
 
     let intentions = if filter_status == "all" {
         // Get all by combining different statuses
@@ -682,14 +676,14 @@ mod tests {
     }
 
     /// Create a test storage instance with a temporary database
-    async fn test_storage() -> (Arc<Mutex<Storage>>, TempDir) {
+    async fn test_storage() -> (Arc<Storage>, TempDir) {
         let dir = TempDir::new().unwrap();
         let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
-        (Arc::new(Mutex::new(storage)), dir)
+        (Arc::new(storage), dir)
     }
 
     /// Helper to create an intention and return its ID
-    async fn create_test_intention(storage: &Arc<Mutex<Storage>>, description: &str) -> String {
+    async fn create_test_intention(storage: &Arc<Storage>, description: &str) -> String {
         let args = serde_json::json!({
             "action": "set",
             "description": description


@@ -5,7 +5,7 @@
 use serde::Deserialize;
 use serde_json::Value;
 use std::sync::Arc;
-use tokio::sync::Mutex;
 
 use chrono::{DateTime, Utc, Duration};
 use uuid::Uuid;
@@ -222,7 +222,7 @@ struct ListArgs {
 /// Execute set_intention tool
 pub async fn execute_set(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     args: Option<Value>,
 ) -> Result<Value, String> {
     let args: SetIntentionArgs = match args {
@@ -290,7 +290,6 @@ pub async fn execute_set(
         source_data: None,
     };
 
-    let mut storage = storage.lock().await;
     storage.save_intention(&record).map_err(|e| e.to_string())?;
 
     Ok(serde_json::json!({
@@ -305,7 +304,7 @@ pub async fn execute_set(
 /// Execute check_intentions tool
 pub async fn execute_check(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     args: Option<Value>,
 ) -> Result<Value, String> {
     let args: CheckIntentionsArgs = match args {
@@ -314,7 +313,6 @@ pub async fn execute_check(
     };
 
     let now = Utc::now();
-    let storage = storage.lock().await;
 
     // Get active intentions
     let intentions = storage.get_active_intentions().map_err(|e| e.to_string())?;
@@ -400,7 +398,7 @@ pub async fn execute_check(
 /// Execute complete_intention tool
 pub async fn execute_complete(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     args: Option<Value>,
 ) -> Result<Value, String> {
     let args: IntentionIdArgs = match args {
@@ -408,7 +406,6 @@ pub async fn execute_complete(
         None => return Err("Missing intention_id".to_string()),
     };
 
-    let mut storage = storage.lock().await;
     let updated = storage.update_intention_status(&args.intention_id, "fulfilled")
         .map_err(|e| e.to_string())?;
@@ -425,7 +422,7 @@ pub async fn execute_complete(
 /// Execute snooze_intention tool
 pub async fn execute_snooze(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     args: Option<Value>,
 ) -> Result<Value, String> {
     let args: SnoozeArgs = match args {
@@ -436,7 +433,6 @@ pub async fn execute_snooze(
     let minutes = args.minutes.unwrap_or(30);
     let snooze_until = Utc::now() + Duration::minutes(minutes);
 
-    let mut storage = storage.lock().await;
     let updated = storage.snooze_intention(&args.intention_id, snooze_until)
         .map_err(|e| e.to_string())?;
@@ -454,7 +450,7 @@ pub async fn execute_snooze(
 /// Execute list_intentions tool
 pub async fn execute_list(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     args: Option<Value>,
 ) -> Result<Value, String> {
     let args: ListArgs = match args {
@@ -463,7 +459,6 @@ pub async fn execute_list(
     };
 
     let status = args.status.as_deref().unwrap_or("active");
-    let storage = storage.lock().await;
 
     let intentions = if status == "all" {
         // Get all by combining different statuses
@@ -522,14 +517,14 @@ mod tests {
     use tempfile::TempDir;
 
     /// Create a test storage instance with a temporary database
-    async fn test_storage() -> (Arc<Mutex<Storage>>, TempDir) {
+    async fn test_storage() -> (Arc<Storage>, TempDir) {
         let dir = TempDir::new().unwrap();
         let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
-        (Arc::new(Mutex::new(storage)), dir)
+        (Arc::new(storage), dir)
     }
 
     /// Helper to create an intention and return its ID
-    async fn create_test_intention(storage: &Arc<Mutex<Storage>>, description: &str) -> String {
+    async fn create_test_intention(storage: &Arc<Storage>, description: &str) -> String {
         let args = serde_json::json!({
             "description": description
         });


@@ -5,7 +5,6 @@
 use serde::Deserialize;
 use serde_json::Value;
 use std::sync::Arc;
-use tokio::sync::Mutex;
 
 use vestige_core::Storage;
@@ -44,7 +43,7 @@ struct KnowledgeArgs {
 }
 
 pub async fn execute_get(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     args: Option<Value>,
 ) -> Result<Value, String> {
     let args: KnowledgeArgs = match args {
@@ -55,7 +54,6 @@ pub async fn execute_get(
     // Validate UUID
     uuid::Uuid::parse_str(&args.id).map_err(|_| "Invalid node ID format".to_string())?;
 
-    let storage = storage.lock().await;
     let node = storage.get_node(&args.id).map_err(|e| e.to_string())?;
 
     match node {
@@ -93,7 +91,7 @@ pub async fn execute_get(
 }
 
 pub async fn execute_delete(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     args: Option<Value>,
 ) -> Result<Value, String> {
     let args: KnowledgeArgs = match args {
@@ -104,7 +102,6 @@ pub async fn execute_delete(
     // Validate UUID
     uuid::Uuid::parse_str(&args.id).map_err(|_| "Invalid node ID format".to_string())?;
 
-    let mut storage = storage.lock().await;
     let deleted = storage.delete_node(&args.id).map_err(|e| e.to_string())?;
 
     Ok(serde_json::json!({


@@ -118,12 +118,11 @@ pub fn system_status_schema() -> Value {
 /// Returns system health status, full statistics, FSRS preview,
 /// cognitive module health, state distribution, and actionable recommendations.
 pub async fn execute_system_status(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     cognitive: &Arc<Mutex<CognitiveEngine>>,
     _args: Option<Value>,
 ) -> Result<Value, String> {
-    let storage_guard = storage.lock().await;
-    let stats = storage_guard.get_stats().map_err(|e| e.to_string())?;
+    let stats = storage.get_stats().map_err(|e| e.to_string())?;
 
     // === Health assessment ===
     let status = if stats.total_nodes == 0 {
@@ -142,7 +141,7 @@ pub async fn execute_system_status(
         0.0
     };
 
-    let embedding_ready = storage_guard.is_embedding_ready();
+    let embedding_ready = storage.is_embedding_ready();
 
     let mut warnings = Vec::new();
     if stats.average_retention < 0.5 && stats.total_nodes > 0 {
@@ -176,7 +175,7 @@ pub async fn execute_system_status(
     }
 
     // === State distribution ===
-    let nodes = storage_guard.get_all_nodes(500, 0).map_err(|e| e.to_string())?;
+    let nodes = storage.get_all_nodes(500, 0).map_err(|e| e.to_string())?;
     let total = nodes.len();
     let (active, dormant, silent, unavailable) = if total > 0 {
         let mut a = 0usize;
@@ -246,15 +245,14 @@ pub async fn execute_system_status(
     };
 
     // === Automation triggers (for conditional dream/backup/gc at session start) ===
-    let last_consolidation = storage_guard.get_last_consolidation().ok().flatten();
-    let last_dream = storage_guard.get_last_dream().ok().flatten();
+    let last_consolidation = storage.get_last_consolidation().ok().flatten();
+    let last_dream = storage.get_last_dream().ok().flatten();
     let saves_since_last_dream = match &last_dream {
-        Some(dt) => storage_guard.count_memories_since(*dt).unwrap_or(0),
+        Some(dt) => storage.count_memories_since(*dt).unwrap_or(0),
         None => stats.total_nodes as i64,
     };
     let last_backup = Storage::get_last_backup_timestamp();
-    drop(storage_guard);
 
     Ok(serde_json::json!({
         "tool": "system_status",
@@ -299,10 +297,9 @@ pub async fn execute_system_status(
 /// Health check tool — deprecated in v1.7, use execute_system_status() instead
 #[allow(dead_code)]
 pub async fn execute_health_check(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     _args: Option<Value>,
 ) -> Result<Value, String> {
-    let storage = storage.lock().await;
     let stats = storage.get_stats().map_err(|e| e.to_string())?;
 
     let status = if stats.total_nodes == 0 {
@@ -369,10 +366,9 @@ pub async fn execute_health_check(
 /// Consolidate tool
 pub async fn execute_consolidate(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     _args: Option<Value>,
 ) -> Result<Value, String> {
-    let mut storage = storage.lock().await;
     let result = storage.run_consolidation().map_err(|e| e.to_string())?;
 
     Ok(serde_json::json!({
@@ -392,15 +388,14 @@ pub async fn execute_consolidate(
 /// Stats tool — deprecated in v1.7, use execute_system_status() instead
 #[allow(dead_code)]
 pub async fn execute_stats(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     cognitive: &Arc<Mutex<CognitiveEngine>>,
     _args: Option<Value>,
 ) -> Result<Value, String> {
-    let storage_guard = storage.lock().await;
-    let stats = storage_guard.get_stats().map_err(|e| e.to_string())?;
+    let stats = storage.get_stats().map_err(|e| e.to_string())?;
 
     // Compute state distribution from a sample of nodes
-    let nodes = storage_guard.get_all_nodes(500, 0).map_err(|e| e.to_string())?;
+    let nodes = storage.get_all_nodes(500, 0).map_err(|e| e.to_string())?;
     let total = nodes.len();
     let (active, dormant, silent, unavailable) = if total > 0 {
         let mut a = 0usize;
@@ -543,7 +538,6 @@ pub async fn execute_stats(
     } else {
         None
     };
-    drop(storage_guard);
 
     Ok(serde_json::json!({
         "tool": "stats",
@@ -573,7 +567,7 @@ pub async fn execute_stats(
 /// Backup tool
 pub async fn execute_backup(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     _args: Option<Value>,
 ) -> Result<Value, String> {
     // Determine backup path
@@ -591,7 +585,6 @@ pub async fn execute_backup(
     // Use VACUUM INTO for a consistent backup (handles WAL properly)
     {
-        let storage = storage.lock().await;
         storage.backup_to(&backup_path)
             .map_err(|e| format!("Failed to create backup: {}", e))?;
     }
@@ -619,7 +612,7 @@ struct ExportArgs {
 /// Export tool
 pub async fn execute_export(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     args: Option<Value>,
 ) -> Result<Value, String> {
     let args: ExportArgs = match args {
@@ -650,7 +643,6 @@ pub async fn execute_export(
     let tag_filter: Vec<String> = args.tags.unwrap_or_default();
 
     // Fetch all nodes (capped at 100K to prevent OOM)
-    let storage = storage.lock().await;
     let mut all_nodes = Vec::new();
     let page_size = 500;
     let max_nodes = 100_000;
@@ -755,7 +747,7 @@ struct GcArgs {
 /// Garbage collection tool
 pub async fn execute_gc(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     args: Option<Value>,
 ) -> Result<Value, String> {
     let args: GcArgs = match args {
@@ -771,7 +763,6 @@ pub async fn execute_gc(
     let max_age_days = args.max_age_days;
     let dry_run = args.dry_run.unwrap_or(true); // Default to dry_run for safety
 
-    let mut storage = storage.lock().await;
     let now = Utc::now();
 
     // Fetch all nodes (capped at 100K to prevent OOM)
@@ -883,10 +874,10 @@ mod tests {
         Arc::new(Mutex::new(CognitiveEngine::new()))
     }
 
-    async fn test_storage() -> (Arc<Mutex<Storage>>, TempDir) {
+    async fn test_storage() -> (Arc<Storage>, TempDir) {
         let dir = TempDir::new().unwrap();
         let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
-        (Arc::new(Mutex::new(storage)), dir)
+        (Arc::new(storage), dir)
     }
 
     #[test]
@@ -912,8 +903,7 @@ mod tests {
     async fn test_system_status_with_memories() {
         let (storage, _dir) = test_storage().await;
         {
-            let mut s = storage.lock().await;
-            s.ingest(vestige_core::IngestInput {
+            storage.ingest(vestige_core::IngestInput {
                 content: "Test memory for status".to_string(),
                 node_type: "fact".to_string(),
                 source: None,
@@ -961,9 +951,8 @@ mod tests {
     async fn test_system_status_automation_triggers_with_memories() {
         let (storage, _dir) = test_storage().await;
         {
-            let mut s = storage.lock().await;
             for i in 0..3 {
-                s.ingest(vestige_core::IngestInput {
+                storage.ingest(vestige_core::IngestInput {
                     content: format!("Automation trigger test memory {}", i),
                     node_type: "fact".to_string(),
                     source: None,


@@ -5,7 +5,6 @@
 use serde_json::Value;
 use std::sync::Arc;
-use tokio::sync::Mutex;
 use vestige_core::{MemoryState, Storage};
@@ -77,7 +76,7 @@ pub fn stats_schema() -> Value {
 /// Get the cognitive state of a specific memory
 pub async fn execute_get(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     args: Option<Value>,
 ) -> Result<Value, String> {
     let args = args.ok_or("Missing arguments")?;
@@ -86,7 +85,6 @@ pub async fn execute_get(
         .as_str()
         .ok_or("memory_id is required")?;
-    let storage = storage.lock().await;
     // Get the memory
     let memory = storage.get_node(memory_id)
@@ -131,7 +129,7 @@ pub async fn execute_get(
 /// List memories by state
 pub async fn execute_list(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     args: Option<Value>,
 ) -> Result<Value, String> {
     let args = args.unwrap_or(serde_json::json!({}));
@@ -139,7 +137,6 @@ pub async fn execute_list(
     let state_filter = args["state"].as_str();
     let limit = args["limit"].as_i64().unwrap_or(20) as usize;
-    let storage = storage.lock().await;
     // Get all memories
     let memories = storage.get_all_nodes(500, 0)
@@ -210,9 +207,8 @@ pub async fn execute_list(
 /// Get memory state statistics
 pub async fn execute_stats(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
 ) -> Result<Value, String> {
-    let storage = storage.lock().await;
     let memories = storage.get_all_nodes(1000, 0)
         .map_err(|e| e.to_string())?;


@@ -69,7 +69,7 @@ struct MemoryArgs {
 /// Execute the unified memory tool
 pub async fn execute(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     cognitive: &Arc<Mutex<CognitiveEngine>>,
     args: Option<Value>,
 ) -> Result<Value, String> {
@@ -95,8 +95,7 @@ pub async fn execute(
 }
 /// Get full memory node with all metadata
-async fn execute_get(storage: &Arc<Mutex<Storage>>, id: &str) -> Result<Value, String> {
-    let storage = storage.lock().await;
+async fn execute_get(storage: &Arc<Storage>, id: &str) -> Result<Value, String> {
     let node = storage.get_node(id).map_err(|e| e.to_string())?;
     match node {
@@ -136,8 +135,7 @@ async fn execute_get(storage: &Arc<Mutex<Storage>>, id: &str) -> Result<Value, S
 }
 /// Delete a memory and return success status
-async fn execute_delete(storage: &Arc<Mutex<Storage>>, id: &str) -> Result<Value, String> {
-    let mut storage = storage.lock().await;
+async fn execute_delete(storage: &Arc<Storage>, id: &str) -> Result<Value, String> {
     let deleted = storage.delete_node(id).map_err(|e| e.to_string())?;
     Ok(serde_json::json!({
@@ -149,8 +147,7 @@ async fn execute_delete(storage: &Arc<Mutex<Storage>>, id: &str) -> Result<Value
 }
 /// Get accessibility state of a memory (Active/Dormant/Silent/Unavailable)
-async fn execute_state(storage: &Arc<Mutex<Storage>>, id: &str) -> Result<Value, String> {
-    let storage = storage.lock().await;
+async fn execute_state(storage: &Arc<Storage>, id: &str) -> Result<Value, String> {
     // Get the memory
     let memory = storage
@@ -197,18 +194,16 @@ async fn execute_state(storage: &Arc<Mutex<Storage>>, id: &str) -> Result<Value,
 /// Promote a memory (thumbs up) — increases retrieval strength with cognitive feedback pipeline
 async fn execute_promote(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     cognitive: &Arc<Mutex<CognitiveEngine>>,
     id: &str,
     reason: Option<String>,
 ) -> Result<Value, String> {
-    let storage_guard = storage.lock().await;
-    let before = storage_guard.get_node(id).map_err(|e| e.to_string())?
+    let before = storage.get_node(id).map_err(|e| e.to_string())?
         .ok_or_else(|| format!("Node not found: {}", id))?;
-    let node = storage_guard.promote_memory(id).map_err(|e| e.to_string())?;
-    drop(storage_guard);
+    let node = storage.promote_memory(id).map_err(|e| e.to_string())?;
     // Cognitive feedback pipeline
     if let Ok(mut cog) = cognitive.try_lock() {
@@ -254,18 +249,16 @@ async fn execute_promote(
 /// Demote a memory (thumbs down) — decreases retrieval strength with cognitive feedback pipeline
 async fn execute_demote(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     cognitive: &Arc<Mutex<CognitiveEngine>>,
     id: &str,
     reason: Option<String>,
 ) -> Result<Value, String> {
-    let storage_guard = storage.lock().await;
-    let before = storage_guard.get_node(id).map_err(|e| e.to_string())?
+    let before = storage.get_node(id).map_err(|e| e.to_string())?
         .ok_or_else(|| format!("Node not found: {}", id))?;
-    let node = storage_guard.demote_memory(id).map_err(|e| e.to_string())?;
-    drop(storage_guard);
+    let node = storage.demote_memory(id).map_err(|e| e.to_string())?;
     // Cognitive feedback pipeline
     if let Ok(mut cog) = cognitive.try_lock() {
@@ -356,15 +349,14 @@ mod tests {
         Arc::new(Mutex::new(CognitiveEngine::new()))
     }
-    async fn test_storage() -> (Arc<Mutex<Storage>>, tempfile::TempDir) {
+    async fn test_storage() -> (Arc<Storage>, tempfile::TempDir) {
         let dir = tempfile::TempDir::new().unwrap();
         let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
-        (Arc::new(Mutex::new(storage)), dir)
+        (Arc::new(storage), dir)
     }
-    async fn ingest_memory(storage: &Arc<Mutex<Storage>>) -> String {
-        let mut s = storage.lock().await;
-        let node = s
+    async fn ingest_memory(storage: &Arc<Storage>) -> String {
+        let node = storage
             .ingest(vestige_core::IngestInput {
                 content: "Memory unified test content".to_string(),
                 node_type: "fact".to_string(),

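The recurring change in the hunks above, replacing `Arc<Mutex<Storage>>` with a plain `Arc<Storage>` and deleting every `storage.lock().await`, works only if `Storage` takes `&self` on its mutating methods and guards its own state internally. A minimal standalone sketch of that pattern (the `Storage` struct and its `rows` field here are hypothetical stand-ins, not Vestige's real internals):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Toy stand-in for a storage type with interior mutability: callers share a
// plain Arc<Storage> and never take an outer lock themselves.
struct Storage {
    rows: Mutex<Vec<String>>, // hypothetical internal state, guarded internally
}

impl Storage {
    fn new() -> Self {
        Storage { rows: Mutex::new(Vec::new()) }
    }

    // Note &self, not &mut self: callable through a shared Arc.
    fn ingest(&self, content: &str) {
        self.rows.lock().unwrap().push(content.to_string());
    }

    fn count(&self) -> usize {
        self.rows.lock().unwrap().len()
    }
}

fn main() {
    let storage = Arc::new(Storage::new()); // no Arc<Mutex<_>> wrapper needed
    let handles: Vec<_> = (0..4)
        .map(|i| {
            let s = Arc::clone(&storage);
            thread::spawn(move || s.ingest(&format!("memory {}", i)))
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    println!("{}", storage.count()); // prints 4
}
```

The payoff visible in the diff: handlers no longer serialize on one async mutex, and helper code like `execute_promote` loses its `drop(storage_guard)` bookkeeping.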

@@ -30,6 +30,13 @@ pub mod explore;
 pub mod predict;
 pub mod restore;
+// v1.8: Context Packets
+pub mod session_context;
+// v1.9: Autonomic tools
+pub mod health;
+pub mod graph;
 // Deprecated tools - kept for internal backwards compatibility
 // These modules are intentionally unused in the public API
 #[allow(dead_code)]


@@ -29,7 +29,7 @@ pub fn schema() -> serde_json::Value {
 }
 pub async fn execute(
-    _storage: &Arc<Mutex<Storage>>,
+    _storage: &Arc<Storage>,
     cognitive: &Arc<Mutex<CognitiveEngine>>,
     args: Option<serde_json::Value>,
 ) -> Result<serde_json::Value, String> {
@@ -127,10 +127,10 @@ mod tests {
         Arc::new(Mutex::new(CognitiveEngine::new()))
     }
-    async fn test_storage() -> (Arc<Mutex<Storage>>, TempDir) {
+    async fn test_storage() -> (Arc<Storage>, TempDir) {
         let dir = TempDir::new().unwrap();
         let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
-        (Arc::new(Mutex::new(storage)), dir)
+        (Arc::new(storage), dir)
     }
     #[test]


@@ -5,7 +5,6 @@
 use serde::Deserialize;
 use serde_json::Value;
 use std::sync::Arc;
-use tokio::sync::Mutex;
 use vestige_core::{RecallInput, SearchMode, Storage};
@@ -46,7 +45,7 @@ struct RecallArgs {
 }
 pub async fn execute(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     args: Option<Value>,
 ) -> Result<Value, String> {
     let args: RecallArgs = match args {
@@ -66,7 +65,6 @@ pub async fn execute(
         valid_at: None,
     };
-    let storage = storage.lock().await;
     let nodes = storage.recall(input).map_err(|e| e.to_string())?;
     let results: Vec<Value> = nodes
@@ -107,14 +105,14 @@ mod tests {
     use tempfile::TempDir;
     /// Create a test storage instance with a temporary database
-    async fn test_storage() -> (Arc<Mutex<Storage>>, TempDir) {
+    async fn test_storage() -> (Arc<Storage>, TempDir) {
         let dir = TempDir::new().unwrap();
         let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
-        (Arc::new(Mutex::new(storage)), dir)
+        (Arc::new(storage), dir)
     }
     /// Helper to ingest test content
-    async fn ingest_test_content(storage: &Arc<Mutex<Storage>>, content: &str) -> String {
+    async fn ingest_test_content(storage: &Arc<Storage>, content: &str) -> String {
         let input = IngestInput {
             content: content.to_string(),
             node_type: "fact".to_string(),
@@ -125,8 +123,7 @@ mod tests {
             valid_from: None,
             valid_until: None,
         };
-        let mut storage_lock = storage.lock().await;
-        let node = storage_lock.ingest(input).unwrap();
+        let node = storage.ingest(input).unwrap();
         node.id
     }


@@ -7,7 +7,7 @@
 use serde::Deserialize;
 use serde_json::Value;
 use std::sync::Arc;
-use tokio::sync::Mutex;
 use vestige_core::{IngestInput, Storage};
@@ -52,7 +52,7 @@ struct MemoryBackup {
 }
 pub async fn execute(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     args: Option<Value>,
 ) -> Result<Value, String> {
     let args: RestoreArgs = match args {
@@ -102,7 +102,6 @@ pub async fn execute(
         }));
     }
-    let mut storage_guard = storage.lock().await;
     let mut success_count = 0_usize;
     let mut error_count = 0_usize;
@@ -118,7 +117,7 @@ pub async fn execute(
             valid_until: None,
         };
-        match storage_guard.ingest(input) {
+        match storage.ingest(input) {
             Ok(_) => success_count += 1,
             Err(_) => error_count += 1,
         }
@@ -140,10 +139,10 @@ mod tests {
     use std::io::Write;
     use tempfile::TempDir;
-    async fn test_storage() -> (Arc<Mutex<Storage>>, TempDir) {
+    async fn test_storage() -> (Arc<Storage>, TempDir) {
         let dir = TempDir::new().unwrap();
         let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
-        (Arc::new(Mutex::new(storage)), dir)
+        (Arc::new(storage), dir)
     }
     fn write_temp_file(dir: &TempDir, name: &str, content: &str) -> String {


@@ -5,7 +5,7 @@
 use serde::Deserialize;
 use serde_json::Value;
 use std::sync::Arc;
-use tokio::sync::Mutex;
 use vestige_core::{Rating, Storage};
@@ -38,7 +38,7 @@ struct ReviewArgs {
 }
 pub async fn execute(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     args: Option<Value>,
 ) -> Result<Value, String> {
     let args: ReviewArgs = match args {
@@ -57,7 +57,6 @@ pub async fn execute(
     let rating = Rating::from_i32(rating_value)
         .ok_or_else(|| "Invalid rating value".to_string())?;
-    let mut storage = storage.lock().await;
     // Get node before review for comparison
     let before = storage.get_node(&args.id).map_err(|e| e.to_string())?
@@ -102,14 +101,14 @@ mod tests {
     use tempfile::TempDir;
     /// Create a test storage instance with a temporary database
-    async fn test_storage() -> (Arc<Mutex<Storage>>, TempDir) {
+    async fn test_storage() -> (Arc<Storage>, TempDir) {
         let dir = TempDir::new().unwrap();
         let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
-        (Arc::new(Mutex::new(storage)), dir)
+        (Arc::new(storage), dir)
     }
     /// Helper to ingest test content and return node ID
-    async fn ingest_test_content(storage: &Arc<Mutex<Storage>>, content: &str) -> String {
+    async fn ingest_test_content(storage: &Arc<Storage>, content: &str) -> String {
         let input = IngestInput {
             content: content.to_string(),
             node_type: "fact".to_string(),
@@ -120,8 +119,7 @@ mod tests {
             valid_from: None,
             valid_until: None,
         };
-        let mut storage_lock = storage.lock().await;
-        let node = storage_lock.ingest(input).unwrap();
+        let node = storage.ingest(input).unwrap();
         node.id
     }


@@ -5,7 +5,6 @@
 use serde::Deserialize;
 use serde_json::Value;
 use std::sync::Arc;
-use tokio::sync::Mutex;
 use vestige_core::Storage;
@@ -90,7 +89,7 @@ struct HybridSearchArgs {
 }
 pub async fn execute_semantic(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     args: Option<Value>,
 ) -> Result<Value, String> {
     let args: SemanticSearchArgs = match args {
@@ -102,7 +101,6 @@ pub async fn execute_semantic(
         return Err("Query cannot be empty".to_string());
     }
-    let storage = storage.lock().await;
     // Check if embeddings are ready
     if !storage.is_embedding_ready() {
@@ -143,7 +141,7 @@ pub async fn execute_semantic(
 }
 pub async fn execute_hybrid(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     args: Option<Value>,
 ) -> Result<Value, String> {
     let args: HybridSearchArgs = match args {
@@ -155,7 +153,6 @@ pub async fn execute_hybrid(
         return Err("Query cannot be empty".to_string());
     }
-    let storage = storage.lock().await;
     let results = storage
         .hybrid_search(


@@ -65,6 +65,12 @@ pub fn schema() -> Value {
                 "type": "array",
                 "items": { "type": "string" },
                 "description": "Optional topics for context-dependent retrieval boosting"
+            },
+            "token_budget": {
+                "type": "integer",
+                "description": "Max tokens for response. Server truncates content to fit budget. Use memory(action='get') for full content of specific IDs.",
+                "minimum": 100,
+                "maximum": 10000
             }
         },
         "required": ["query"]
@@ -81,6 +87,8 @@ struct SearchArgs {
     #[serde(alias = "detail_level")]
     detail_level: Option<String>,
     context_topics: Option<Vec<String>>,
+    #[serde(alias = "token_budget")]
+    token_budget: Option<i32>,
 }
 /// Execute unified search with 7-stage cognitive pipeline.
@@ -96,7 +104,7 @@ struct SearchArgs {
 ///
 /// Also applies Testing Effect (Roediger & Karpicke 2006) by auto-strengthening on access.
 pub async fn execute(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     cognitive: &Arc<Mutex<CognitiveEngine>>,
     args: Option<Value>,
 ) -> Result<Value, String> {
@@ -135,9 +143,8 @@ pub async fn execute(
     // STAGE 1: Hybrid search with 3x over-fetch for reranking pool
     // ====================================================================
     let overfetch_limit = (limit * 3).min(100); // Cap at 100 to avoid excessive DB load
-    let storage_guard = storage.lock().await;
-    let results = storage_guard
+    let results = storage
         .hybrid_search(&args.query, overfetch_limit, keyword_weight, semantic_weight)
         .map_err(|e| e.to_string())?;
@@ -327,10 +334,9 @@ pub async fn execute(
     // Auto-strengthen on access (Testing Effect)
     // ====================================================================
     let ids: Vec<&str> = filtered_results.iter().map(|r| r.node.id.as_str()).collect();
-    let _ = storage_guard.strengthen_batch_on_access(&ids);
+    let _ = storage.strengthen_batch_on_access(&ids);
     // Drop storage lock before acquiring cognitive for side effects
-    drop(storage_guard);
     // ====================================================================
     // STAGE 7: Side effects — predictive memory + reconsolidation
@@ -371,11 +377,38 @@ pub async fn execute(
     // ====================================================================
     // Format and return
     // ====================================================================
-    let formatted: Vec<Value> = filtered_results
+    let mut formatted: Vec<Value> = filtered_results
         .iter()
         .map(|r| format_search_result(r, detail_level))
         .collect();
+    // ====================================================================
+    // Token budget enforcement (v1.8.0)
+    // ====================================================================
+    let mut budget_expandable: Vec<String> = Vec::new();
+    let mut budget_tokens_used: Option<usize> = None;
+    if let Some(budget) = args.token_budget {
+        let budget = budget.clamp(100, 10000) as usize;
+        let budget_chars = budget * 4;
+        let mut used = 0;
+        let mut budgeted = Vec::new();
+        for result in &formatted {
+            let size = serde_json::to_string(result).unwrap_or_default().len();
+            if used + size > budget_chars {
+                if let Some(id) = result.get("id").and_then(|v| v.as_str()) {
+                    budget_expandable.push(id.to_string());
+                }
+                continue;
+            }
+            used += size;
+            budgeted.push(result.clone());
+        }
+        budget_tokens_used = Some(used / 4);
+        formatted = budgeted;
+    }
     // Check learning mode via attention signal
     let learning_mode = cognitive.try_lock().ok().map(|cog| cog.attention_signal.is_learning_mode()).unwrap_or(false);
@@ -403,6 +436,16 @@ pub async fn execute(
     if learning_mode {
         response["learningModeDetected"] = serde_json::json!(true);
     }
+    // Include token budget info (v1.8.0)
+    if !budget_expandable.is_empty() {
+        response["expandable"] = serde_json::json!(budget_expandable);
+    }
+    if let Some(budget) = args.token_budget {
+        response["tokenBudget"] = serde_json::json!(budget);
+    }
+    if let Some(used) = budget_tokens_used {
+        response["tokensUsed"] = serde_json::json!(used);
+    }
     Ok(response)
 }
@@ -516,14 +559,14 @@ mod tests {
     }
     /// Create a test storage instance with a temporary database
-    async fn test_storage() -> (Arc<Mutex<Storage>>, TempDir) {
+    async fn test_storage() -> (Arc<Storage>, TempDir) {
         let dir = TempDir::new().unwrap();
         let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
-        (Arc::new(Mutex::new(storage)), dir)
+        (Arc::new(storage), dir)
     }
     /// Helper to ingest test content
-    async fn ingest_test_content(storage: &Arc<Mutex<Storage>>, content: &str) -> String {
+    async fn ingest_test_content(storage: &Arc<Storage>, content: &str) -> String {
         let input = IngestInput {
             content: content.to_string(),
             node_type: "fact".to_string(),
@@ -534,8 +577,7 @@ mod tests {
             valid_from: None,
             valid_until: None,
         };
-        let mut storage_lock = storage.lock().await;
-        let node = storage_lock.ingest(input).unwrap();
+        let node = storage.ingest(input).unwrap();
         node.id
     }
@@ -967,4 +1009,90 @@ mod tests {
         assert!(result.is_err());
         assert!(result.unwrap_err().contains("Invalid detail_level"));
     }
+    // ========================================================================
+    // TOKEN BUDGET TESTS (v1.8.0)
+    // ========================================================================
+    #[tokio::test]
+    async fn test_token_budget_limits_results() {
+        let (storage, _dir) = test_storage().await;
+        for i in 0..10 {
+            ingest_test_content(
+                &storage,
+                &format!("Budget test content number {} with some extra text to increase size.", i),
+            )
+            .await;
+        }
+        // Small budget should reduce results
+        let args = serde_json::json!({
+            "query": "budget test",
+            "token_budget": 200,
+            "min_similarity": 0.0
+        });
+        let result = execute(&storage, &test_cognitive(), Some(args)).await;
+        assert!(result.is_ok());
+        let value = result.unwrap();
+        assert!(value["tokenBudget"].as_i64().unwrap() == 200);
+        assert!(value["tokensUsed"].is_number());
+    }
+    #[tokio::test]
+    async fn test_token_budget_expandable() {
+        let (storage, _dir) = test_storage().await;
+        for i in 0..15 {
+            ingest_test_content(
+                &storage,
+                &format!(
+                    "Expandable budget test number {} with quite a bit of content to ensure we exceed the token budget allocation threshold.",
+                    i
+                ),
+            )
+            .await;
+        }
+        let args = serde_json::json!({
+            "query": "expandable budget test",
+            "token_budget": 150,
+            "min_similarity": 0.0
+        });
+        let result = execute(&storage, &test_cognitive(), Some(args)).await;
+        assert!(result.is_ok());
+        let value = result.unwrap();
+        // expandable field should exist if results were dropped
+        if let Some(expandable) = value.get("expandable") {
+            assert!(expandable.is_array());
+        }
+    }
+    #[tokio::test]
+    async fn test_no_budget_unchanged() {
+        let (storage, _dir) = test_storage().await;
+        ingest_test_content(&storage, "No budget test content.").await;
+        let args = serde_json::json!({
+            "query": "no budget",
+            "min_similarity": 0.0
+        });
+        let result = execute(&storage, &test_cognitive(), Some(args)).await;
+        assert!(result.is_ok());
+        let value = result.unwrap();
+        // No budget fields should be present
+        assert!(value.get("tokenBudget").is_none());
+        assert!(value.get("tokensUsed").is_none());
+        assert!(value.get("expandable").is_none());
+    }
+    #[test]
+    fn test_schema_has_token_budget() {
+        let schema_value = schema();
+        let tb = &schema_value["properties"]["token_budget"];
+        assert!(tb.is_object());
+        assert_eq!(tb["minimum"], 100);
+        assert_eq!(tb["maximum"], 10000);
+    }
 }

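Both the search hunk above and the new session_context tool below enforce their token budget the same way: treat one token as roughly four characters, greedily keep results while they fit, and report the IDs of dropped results so the client can fetch them individually later. A standalone sketch of that logic (the `fit_to_budget` function and its `(id, body)` result tuples are illustrative names, not the crate's API; the real code measures the serialized JSON size of each result):

```rust
// Greedy token-budget fill: budget is in tokens, sizes are in chars,
// with the ~4-chars-per-token heuristic used by the diff above.
fn fit_to_budget(
    results: Vec<(String, String)>, // (memory id, rendered body)
    token_budget: usize,
) -> (Vec<(String, String)>, Vec<String>, usize) {
    let budget_chars = token_budget * 4;
    let mut used = 0;
    let mut kept = Vec::new();
    let mut expandable = Vec::new();
    for (id, body) in results {
        let size = body.len();
        if used + size > budget_chars {
            expandable.push(id); // dropped now, retrievable later by ID
        } else {
            used += size;
            kept.push((id, body));
        }
    }
    (kept, expandable, used / 4) // third field ≈ tokens actually used
}

fn main() {
    let results = vec![
        ("a".to_string(), "x".repeat(300)),
        ("b".to_string(), "x".repeat(300)),
        ("c".to_string(), "x".repeat(300)),
    ];
    // A 150-token budget is 600 chars: two 300-char bodies fit, one spills
    // over into the expandable list.
    let (kept, expandable, tokens_used) = fit_to_budget(results, 150);
    println!("{} {} {}", kept.len(), expandable.len(), tokens_used); // 2 1 150
}
```

Note this mirrors the committed behavior of skipping an oversized result but continuing the loop, so a later, smaller result can still fill remaining budget.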

@@ -0,0 +1,718 @@
//! Session Context Tool — One-call session initialization (v1.8.0)
//!
//! Combines search, intentions, status, predictions, and codebase context
//! into a single token-budgeted response. Replaces 5 separate calls at
//! session start (~15K tokens → ~500-1000 tokens).
use std::collections::HashSet;
use std::sync::Arc;
use tokio::sync::Mutex;
use chrono::{DateTime, Duration, Utc};
use serde::Deserialize;
use serde_json::Value;
use crate::cognitive::CognitiveEngine;
use vestige_core::Storage;
/// Input schema for session_context tool
pub fn schema() -> Value {
serde_json::json!({
"type": "object",
"properties": {
"queries": {
"type": "array",
"items": { "type": "string" },
"description": "Search queries to run (default: [\"user preferences\"])"
},
"token_budget": {
"type": "integer",
"description": "Max tokens for response (default: 1000). Server truncates content to fit budget.",
"default": 1000,
"minimum": 100,
"maximum": 10000
},
"context": {
"type": "object",
"description": "Current context for intention matching and predictions",
"properties": {
"codebase": { "type": "string" },
"topics": {
"type": "array",
"items": { "type": "string" }
},
"file": { "type": "string" }
}
},
"include_status": {
"type": "boolean",
"description": "Include system health info (default: true)",
"default": true
},
"include_intentions": {
"type": "boolean",
"description": "Include triggered intentions (default: true)",
"default": true
},
"include_predictions": {
"type": "boolean",
"description": "Include memory predictions (default: true)",
"default": true
}
}
})
}
#[derive(Debug, Deserialize, Default)]
struct SessionContextArgs {
queries: Option<Vec<String>>,
token_budget: Option<i32>,
context: Option<ContextSpec>,
include_status: Option<bool>,
include_intentions: Option<bool>,
include_predictions: Option<bool>,
}
#[derive(Debug, Deserialize, Default)]
struct ContextSpec {
codebase: Option<String>,
topics: Option<Vec<String>>,
file: Option<String>,
}
/// Extract the first sentence or first line from content, capped at 150 chars.
fn first_sentence(content: &str) -> String {
let content = content.trim();
let end = content
.find(". ")
.map(|i| i + 1)
.or_else(|| content.find('\n'))
.unwrap_or(content.len())
.min(150);
// UTF-8 safe boundary
let end = content.floor_char_boundary(end);
content[..end].to_string()
}
/// Execute session_context tool — one-call session initialization.
pub async fn execute(
storage: &Arc<Storage>,
cognitive: &Arc<Mutex<CognitiveEngine>>,
args: Option<Value>,
) -> Result<Value, String> {
let args: SessionContextArgs = match args {
Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
None => SessionContextArgs::default(),
};
let token_budget = args.token_budget.unwrap_or(1000).clamp(100, 10000) as usize;
let budget_chars = token_budget * 4;
let include_status = args.include_status.unwrap_or(true);
let include_intentions = args.include_intentions.unwrap_or(true);
let include_predictions = args.include_predictions.unwrap_or(true);
let queries = args.queries.unwrap_or_else(|| vec!["user preferences".to_string()]);
let mut context_parts: Vec<String> = Vec::new();
let mut expandable_ids: Vec<String> = Vec::new();
let mut char_count = 0;
// ====================================================================
// 1. Search queries — extract first sentence per result, dedup by ID
// ====================================================================
let mut seen_ids = HashSet::new();
let mut memory_lines: Vec<String> = Vec::new();
for query in &queries {
let results = storage
.hybrid_search(query, 5, 0.3, 0.7)
.map_err(|e| e.to_string())?;
for r in results {
if seen_ids.contains(&r.node.id) {
continue;
}
let summary = first_sentence(&r.node.content);
let line = format!("- {}", summary);
let line_len = line.len() + 1; // +1 for newline
if char_count + line_len > budget_chars {
expandable_ids.push(r.node.id.clone());
} else {
memory_lines.push(line);
char_count += line_len;
}
seen_ids.insert(r.node.id.clone());
}
}
// Auto-strengthen accessed memories (Testing Effect)
let accessed_ids: Vec<&str> = seen_ids.iter().map(|s| s.as_str()).collect();
let _ = storage.strengthen_batch_on_access(&accessed_ids);
if !memory_lines.is_empty() {
context_parts.push(format!("**Memories:**\n{}", memory_lines.join("\n")));
}
// ====================================================================
// 2. Intentions — find triggered + pending high-priority
// ====================================================================
if include_intentions {
let intentions = storage.get_active_intentions().map_err(|e| e.to_string())?;
let now = Utc::now();
let mut triggered_lines: Vec<String> = Vec::new();
for intention in &intentions {
let is_overdue = intention.deadline.map(|d| d < now).unwrap_or(false);
// Check context-based triggers
let is_context_triggered = if let Some(ctx) = &args.context {
check_intention_triggered(intention, ctx, now)
} else {
false
};
if is_overdue || is_context_triggered || intention.priority >= 3 {
let priority_str = match intention.priority {
4 => " (critical)",
3 => " (high)",
_ => "",
};
let deadline_str = intention
.deadline
.map(|d| format!(" [due {}]", d.format("%b %d")))
.unwrap_or_default();
let line = format!(
"- {}{}{}",
first_sentence(&intention.content),
priority_str,
deadline_str
);
let line_len = line.len() + 1;
if char_count + line_len <= budget_chars {
triggered_lines.push(line);
char_count += line_len;
}
}
}
if !triggered_lines.is_empty() {
context_parts.push(format!("**Triggered:**\n{}", triggered_lines.join("\n")));
}
}
// ====================================================================
// 3. System status — compact one-liner
// ====================================================================
let stats = storage.get_stats().map_err(|e| e.to_string())?;
let status = if stats.total_nodes == 0 {
"empty"
} else if stats.average_retention < 0.3 {
"critical"
} else if stats.average_retention < 0.5 {
"degraded"
} else {
"healthy"
};
// Automation triggers
let last_dream = storage.get_last_dream().ok().flatten();
let saves_since_last_dream = match &last_dream {
Some(dt) => storage.count_memories_since(*dt).unwrap_or(0),
None => stats.total_nodes as i64,
};
let last_backup = Storage::get_last_backup_timestamp();
let now = Utc::now();
let needs_dream = last_dream
.map(|dt| now - dt > Duration::hours(24) || saves_since_last_dream > 50)
.unwrap_or(true);
let needs_backup = last_backup
.map(|dt| now - dt > Duration::days(7))
.unwrap_or(true);
let needs_gc = status == "degraded" || status == "critical";
if include_status {
let embedding_pct = if stats.total_nodes > 0 {
(stats.nodes_with_embeddings as f64 / stats.total_nodes as f64) * 100.0
} else {
0.0
};
let status_line = format!(
"**Status:** {} memories | {} | {:.0}% embeddings",
stats.total_nodes, status, embedding_pct
);
let status_len = status_line.len() + 1;
if char_count + status_len <= budget_chars {
context_parts.push(status_line);
char_count += status_len;
}
// Needs line (only if any automation needed)
let mut needs: Vec<&str> = Vec::new();
if needs_dream {
needs.push("dream");
}
if needs_backup {
needs.push("backup");
}
if needs_gc {
needs.push("gc");
}
if !needs.is_empty() {
let needs_line = format!("**Needs:** {}", needs.join(", "));
let needs_len = needs_line.len() + 1;
if char_count + needs_len <= budget_chars {
context_parts.push(needs_line);
char_count += needs_len;
}
}
}
// ====================================================================
// 4. Predictions — top 3 with content preview
// ====================================================================
if include_predictions {
let cog = cognitive.lock().await;
let session_ctx = vestige_core::neuroscience::predictive_retrieval::SessionContext {
started_at: Utc::now(),
current_focus: args
.context
.as_ref()
.and_then(|c| c.topics.as_ref())
.and_then(|t| t.first())
.cloned(),
active_files: args
.context
.as_ref()
.and_then(|c| c.file.as_ref())
.map(|f| vec![f.clone()])
.unwrap_or_default(),
accessed_memories: Vec::new(),
recent_queries: Vec::new(),
detected_intent: None,
project_context: args
.context
.as_ref()
.and_then(|c| c.codebase.as_ref())
.map(|name| vestige_core::neuroscience::predictive_retrieval::ProjectContext {
name: name.to_string(),
path: String::new(),
technologies: Vec::new(),
primary_language: None,
}),
};
let predictions = cog
.predictive_memory
.predict_needed_memories(&session_ctx)
.unwrap_or_default();
if !predictions.is_empty() {
let pred_lines: Vec<String> = predictions
.iter()
.take(3)
.map(|p| {
format!(
"- {} ({:.0}%)",
first_sentence(&p.content_preview),
p.confidence * 100.0
)
})
.collect();
let pred_section = format!("**Predicted:**\n{}", pred_lines.join("\n"));
let pred_len = pred_section.len() + 1;
if char_count + pred_len <= budget_chars {
context_parts.push(pred_section);
char_count += pred_len;
}
}
}
// ====================================================================
// 5. Codebase patterns/decisions (if codebase specified)
// ====================================================================
if let Some(ref ctx) = args.context {
if let Some(ref codebase) = ctx.codebase {
let codebase_tag = format!("codebase:{}", codebase);
let mut cb_lines: Vec<String> = Vec::new();
// Get patterns
if let Ok(patterns) = storage.get_nodes_by_type_and_tag("pattern", Some(&codebase_tag), 3) {
for p in &patterns {
let line = format!("- [pattern] {}", first_sentence(&p.content));
let line_len = line.len() + 1;
if char_count + line_len <= budget_chars {
cb_lines.push(line);
char_count += line_len;
}
}
}
// Get decisions
if let Ok(decisions) =
storage.get_nodes_by_type_and_tag("decision", Some(&codebase_tag), 3)
{
for d in &decisions {
let line = format!("- [decision] {}", first_sentence(&d.content));
let line_len = line.len() + 1;
if char_count + line_len <= budget_chars {
cb_lines.push(line);
char_count += line_len;
}
}
}
if !cb_lines.is_empty() {
context_parts.push(format!("**Codebase ({}):**\n{}", codebase, cb_lines.join("\n")));
}
}
}
// ====================================================================
// 6. Assemble final response
// ====================================================================
let header = format!("## Session ({} memories, {})\n", stats.total_nodes, status);
let context_text = format!("{}{}", header, context_parts.join("\n\n"));
let tokens_used = context_text.len() / 4;
Ok(serde_json::json!({
"context": context_text,
"tokensUsed": tokens_used,
"tokenBudget": token_budget,
"expandable": expandable_ids,
"automationTriggers": {
"needsDream": needs_dream,
"needsBackup": needs_backup,
"needsGc": needs_gc,
},
}))
}
/// Check if an intention should be triggered based on the current context.
fn check_intention_triggered(
intention: &vestige_core::IntentionRecord,
ctx: &ContextSpec,
now: DateTime<Utc>,
) -> bool {
// Parse trigger data
let trigger: Option<TriggerData> = serde_json::from_str(&intention.trigger_data).ok();
let Some(trigger) = trigger else {
return false;
};
match trigger.trigger_type.as_deref() {
Some("time") => {
if let Some(ref at) = trigger.at {
if let Ok(trigger_time) = DateTime::parse_from_rfc3339(at) {
return trigger_time.with_timezone(&Utc) <= now;
}
}
if let Some(mins) = trigger.in_minutes {
let trigger_time = intention.created_at + Duration::minutes(mins);
return trigger_time <= now;
}
false
}
Some("context") => {
// Check codebase match
if let (Some(trigger_cb), Some(current_cb)) = (&trigger.codebase, &ctx.codebase)
{
if current_cb
.to_lowercase()
.contains(&trigger_cb.to_lowercase())
{
return true;
}
}
// Check file pattern match
if let (Some(pattern), Some(file)) = (&trigger.file_pattern, &ctx.file) {
if file.contains(pattern.as_str()) {
return true;
}
}
// Check topic match
if let (Some(topic), Some(topics)) = (&trigger.topic, &ctx.topics) {
if topics
.iter()
.any(|t| t.to_lowercase().contains(&topic.to_lowercase()))
{
return true;
}
}
false
}
_ => false,
}
}
#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
struct TriggerData {
#[serde(rename = "type")]
trigger_type: Option<String>,
at: Option<String>,
in_minutes: Option<i64>,
codebase: Option<String>,
file_pattern: Option<String>,
topic: Option<String>,
}
// ============================================================================
// TESTS
// ============================================================================
#[cfg(test)]
mod tests {
use super::*;
use crate::cognitive::CognitiveEngine;
use tempfile::TempDir;
use vestige_core::IngestInput;
fn test_cognitive() -> Arc<Mutex<CognitiveEngine>> {
Arc::new(Mutex::new(CognitiveEngine::new()))
}
async fn test_storage() -> (Arc<Storage>, TempDir) {
let dir = TempDir::new().unwrap();
let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
(Arc::new(storage), dir)
}
async fn ingest_test_content(storage: &Arc<Storage>, content: &str, tags: Vec<&str>) -> String {
let input = IngestInput {
content: content.to_string(),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: tags.into_iter().map(|s| s.to_string()).collect(),
valid_from: None,
valid_until: None,
};
let node = storage.ingest(input).unwrap();
node.id
}
// ========================================================================
// SCHEMA TESTS
// ========================================================================
#[test]
fn test_schema_has_properties() {
let s = schema();
assert_eq!(s["type"], "object");
assert!(s["properties"]["queries"].is_object());
assert!(s["properties"]["token_budget"].is_object());
assert!(s["properties"]["context"].is_object());
assert!(s["properties"]["include_status"].is_object());
assert!(s["properties"]["include_intentions"].is_object());
assert!(s["properties"]["include_predictions"].is_object());
}
#[test]
fn test_schema_token_budget_bounds() {
let s = schema();
let tb = &s["properties"]["token_budget"];
assert_eq!(tb["minimum"], 100);
assert_eq!(tb["maximum"], 10000);
assert_eq!(tb["default"], 1000);
}
// ========================================================================
// EXECUTE TESTS
// ========================================================================
#[tokio::test]
async fn test_default_no_args() {
let (storage, _dir) = test_storage().await;
let result = execute(&storage, &test_cognitive(), None).await;
assert!(result.is_ok());
let value = result.unwrap();
assert!(value["context"].is_string());
assert!(value["tokensUsed"].is_number());
assert!(value["tokenBudget"].is_number());
assert_eq!(value["tokenBudget"], 1000);
assert!(value["expandable"].is_array());
assert!(value["automationTriggers"].is_object());
}
#[tokio::test]
async fn test_with_queries() {
let (storage, _dir) = test_storage().await;
ingest_test_content(&storage, "Sam prefers Rust and TypeScript for all projects.", vec![]).await;
let args = serde_json::json!({
"queries": ["Sam preferences", "project context"]
});
let result = execute(&storage, &test_cognitive(), Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
let ctx = value["context"].as_str().unwrap();
assert!(ctx.contains("Session"));
}
#[tokio::test]
async fn test_token_budget_respected() {
let (storage, _dir) = test_storage().await;
// Ingest several memories to generate content
for i in 0..20 {
ingest_test_content(
&storage,
&format!(
"Memory number {} contains detailed information about topic {} that is quite long and verbose to fill up the token budget.",
i, i
),
vec![],
)
.await;
}
let args = serde_json::json!({
"queries": ["memory"],
"token_budget": 200
});
let result = execute(&storage, &test_cognitive(), Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
let ctx = value["context"].as_str().unwrap();
// Context should be within budget (200 tokens * 4 = 800 chars + header overhead)
// The actual char count of context should be reasonable
let tokens_used = value["tokensUsed"].as_u64().unwrap();
// Allow some overhead for the header
assert!(tokens_used <= 300, "tokens_used {} should be near budget 200", tokens_used);
}
#[tokio::test]
async fn test_expandable_ids() {
let (storage, _dir) = test_storage().await;
// Ingest many memories
for i in 0..20 {
ingest_test_content(
&storage,
&format!(
"Expandable test memory {} with enough content to take up space in the token budget allocation.",
i
),
vec![],
)
.await;
}
let args = serde_json::json!({
"queries": ["expandable test memory"],
"token_budget": 150
});
let result = execute(&storage, &test_cognitive(), Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
// expandable should be a valid array (may be empty if all fit within budget)
assert!(value["expandable"].is_array());
}
#[tokio::test]
async fn test_automation_triggers_booleans() {
let (storage, _dir) = test_storage().await;
let result = execute(&storage, &test_cognitive(), None).await;
assert!(result.is_ok());
let value = result.unwrap();
let triggers = &value["automationTriggers"];
assert!(triggers["needsDream"].is_boolean());
assert!(triggers["needsBackup"].is_boolean());
assert!(triggers["needsGc"].is_boolean());
}
#[tokio::test]
async fn test_disable_sections() {
let (storage, _dir) = test_storage().await;
ingest_test_content(&storage, "Test memory for disable sections.", vec![]).await;
let args = serde_json::json!({
"include_status": false,
"include_intentions": false,
"include_predictions": false
});
let result = execute(&storage, &test_cognitive(), Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
let context_str = value["context"].as_str().unwrap();
// Should NOT contain status line when disabled
assert!(!context_str.contains("**Status:**"));
// automationTriggers should still be present (always computed)
assert!(value["automationTriggers"].is_object());
}
#[tokio::test]
async fn test_with_codebase_context() {
let (storage, _dir) = test_storage().await;
// Ingest a pattern with codebase tag
let input = IngestInput {
content: "Code pattern: Use Arc<Mutex<>> for shared state in async contexts.".to_string(),
node_type: "pattern".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: vec!["pattern".to_string(), "codebase:vestige".to_string()],
valid_from: None,
valid_until: None,
};
storage.ingest(input).unwrap();
let args = serde_json::json!({
"context": {
"codebase": "vestige",
"topics": ["performance"]
}
});
let result = execute(&storage, &test_cognitive(), Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
let ctx = value["context"].as_str().unwrap();
// Should contain codebase section
assert!(ctx.contains("vestige"));
}
// ========================================================================
// HELPER TESTS
// ========================================================================
#[test]
fn test_first_sentence_period() {
assert_eq!(first_sentence("Hello world. More text here."), "Hello world.");
}
#[test]
fn test_first_sentence_newline() {
assert_eq!(first_sentence("First line\nSecond line"), "First line");
}
#[test]
fn test_first_sentence_short() {
assert_eq!(first_sentence("Short"), "Short");
}
#[test]
fn test_first_sentence_long_truncated() {
let long = "A".repeat(200);
let result = first_sentence(&long);
assert!(result.len() <= 150);
}
#[test]
fn test_first_sentence_empty() {
assert_eq!(first_sentence(""), "");
}
#[test]
fn test_first_sentence_whitespace() {
assert_eq!(first_sentence(" Hello world. "), "Hello world.");
}
}
@@ -114,7 +114,7 @@ struct BatchItem {
 }
 
 pub async fn execute(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     cognitive: &Arc<Mutex<CognitiveEngine>>,
     args: Option<Value>,
 ) -> Result<Value, String> {
@@ -184,16 +184,14 @@ pub async fn execute(
     // ====================================================================
     // INGEST (storage lock)
     // ====================================================================
-    let mut storage_guard = storage.lock().await;
-
     // Check if force_create is enabled
     if args.force_create.unwrap_or(false) {
-        let node = storage_guard.ingest(input).map_err(|e| e.to_string())?;
+        let node = storage.ingest(input).map_err(|e| e.to_string())?;
         let node_id = node.id.clone();
         let node_content = node.content.clone();
         let node_type = node.node_type.clone();
         let has_embedding = node.has_embedding.unwrap_or(false);
-        drop(storage_guard);
 
         // Post-ingest cognitive side effects
         run_post_ingest(cognitive, &node_id, &node_content, &node_type, importance_composite);
@@ -213,12 +211,11 @@ pub async fn execute(
     // Use smart ingest with prediction error gating
     #[cfg(all(feature = "embeddings", feature = "vector-search"))]
     {
-        let result = storage_guard.smart_ingest(input).map_err(|e| e.to_string())?;
+        let result = storage.smart_ingest(input).map_err(|e| e.to_string())?;
         let node_id = result.node.id.clone();
         let node_content = result.node.content.clone();
         let node_type = result.node.node_type.clone();
         let has_embedding = result.node.has_embedding.unwrap_or(false);
-        drop(storage_guard);
 
         // Post-ingest cognitive side effects
         run_post_ingest(cognitive, &node_id, &node_content, &node_type, importance_composite);
@@ -249,11 +246,10 @@ pub async fn execute(
     #[cfg(not(all(feature = "embeddings", feature = "vector-search")))]
     {
-        let node = storage_guard.ingest(input).map_err(|e| e.to_string())?;
+        let node = storage.ingest(input).map_err(|e| e.to_string())?;
         let node_id = node.id.clone();
         let node_content = node.content.clone();
         let node_type = node.node_type.clone();
-        drop(storage_guard);
 
         run_post_ingest(cognitive, &node_id, &node_content, &node_type, importance_composite);
@@ -276,7 +272,7 @@ pub async fn execute(
 /// pre-ingest (importance scoring, intent detection) and post-ingest (synaptic
 /// tagging, novelty update, hippocampal indexing) pipelines per item.
 async fn execute_batch(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     cognitive: &Arc<Mutex<CognitiveEngine>>,
     items: Vec<BatchItem>,
 ) -> Result<Value, String> {
@@ -355,16 +351,14 @@ async fn execute_batch(
         // ================================================================
         // INGEST (storage lock per item)
         // ================================================================
-        let mut storage_guard = storage.lock().await;
-
         #[cfg(all(feature = "embeddings", feature = "vector-search"))]
         {
-            match storage_guard.smart_ingest(input) {
+            match storage.smart_ingest(input) {
                 Ok(result) => {
                     let node_id = result.node.id.clone();
                     let node_content = result.node.content.clone();
                     let node_type = result.node.node_type.clone();
-                    drop(storage_guard);
 
                     match result.decision.as_str() {
                         "create" | "supersede" | "replace" => created += 1,
@@ -386,7 +380,6 @@ async fn execute_batch(
                     }));
                 }
                 Err(e) => {
-                    drop(storage_guard);
                     errors += 1;
                     results.push(serde_json::json!({
                         "index": i,
@@ -399,12 +392,11 @@ async fn execute_batch(
         #[cfg(not(all(feature = "embeddings", feature = "vector-search")))]
         {
-            match storage_guard.ingest(input) {
+            match storage.ingest(input) {
                 Ok(node) => {
                     let node_id = node.id.clone();
                     let node_content = node.content.clone();
                     let node_type = node.node_type.clone();
-                    drop(storage_guard);
 
                     created += 1;
                     run_post_ingest(cognitive, &node_id, &node_content, &node_type, importance_composite);
@@ -419,7 +411,6 @@ async fn execute_batch(
                     }));
                 }
                 Err(e) => {
-                    drop(storage_guard);
                     errors += 1;
                     results.push(serde_json::json!({
                         "index": i,
@@ -498,10 +489,10 @@ mod tests {
     }
 
     /// Create a test storage instance with a temporary database
-    async fn test_storage() -> (Arc<Mutex<Storage>>, TempDir) {
+    async fn test_storage() -> (Arc<Storage>, TempDir) {
         let dir = TempDir::new().unwrap();
         let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
-        (Arc::new(Mutex::new(storage)), dir)
+        (Arc::new(storage), dir)
     }
 
     #[tokio::test]
@@ -662,8 +653,7 @@ mod tests {
         let result = execute(&storage, &test_cognitive(), Some(args)).await;
         assert!(result.is_ok());
         let node_id = result.unwrap()["nodeId"].as_str().unwrap().to_string();
-        let storage_lock = storage.lock().await;
-        let node = storage_lock.get_node(&node_id).unwrap().unwrap();
+        let node = storage.get_node(&node_id).unwrap().unwrap();
         assert_eq!(node.node_type, "fact");
     }
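The recurring change in these hunks — replacing `Arc<Mutex<Storage>>` with `Arc<Storage>` and deleting every `lock().await` / `drop(storage_guard)` pair — presumes `Storage` now synchronizes internally and exposes `&self` methods. A minimal sketch of that pattern (all names here are hypothetical stand-ins, not the real `vestige_core` API):

```rust
use std::sync::{Arc, Mutex};

// After the refactor: Storage owns its own lock around the underlying
// connection, so callers just share a plain Arc<Storage>.
struct Storage {
    inner: Mutex<Vec<String>>, // stand-in for the SQLite connection
}

impl Storage {
    fn new() -> Self {
        Storage { inner: Mutex::new(Vec::new()) }
    }

    // &self instead of &mut self: no external mutex needed, and the
    // lock is held only for the duration of this call.
    fn ingest(&self, content: &str) -> String {
        let mut guard = self.inner.lock().unwrap();
        let id = format!("node-{}", guard.len());
        guard.push(content.to_string());
        id
    }
}

fn main() {
    let storage = Arc::new(Storage::new());
    // Before: storage.lock().await, call, then drop(storage_guard).
    // After: call straight through the shared Arc.
    let id = Arc::clone(&storage).ingest("first memory");
    assert_eq!(id, "node-0");
}
```

Keeping the lock inside `Storage` shrinks the critical section to individual calls, which is why the `drop(storage_guard)` bookkeeping disappears from every tool handler.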
@@ -4,7 +4,6 @@
 use serde_json::Value;
 use std::sync::Arc;
-use tokio::sync::Mutex;
 use vestige_core::{MemoryStats, Storage};
@@ -24,8 +23,7 @@ pub fn health_schema() -> Value {
     })
 }
 
-pub async fn execute_stats(storage: &Arc<Mutex<Storage>>) -> Result<Value, String> {
-    let storage = storage.lock().await;
+pub async fn execute_stats(storage: &Arc<Storage>) -> Result<Value, String> {
     let stats = storage.get_stats().map_err(|e| e.to_string())?;
 
     Ok(serde_json::json!({
@@ -42,8 +40,7 @@ pub async fn execute_stats(storage: &Arc<Mutex<Storage>>) -> Result<Value, Strin
     }))
 }
 
-pub async fn execute_health(storage: &Arc<Mutex<Storage>>) -> Result<Value, String> {
-    let storage = storage.lock().await;
+pub async fn execute_health(storage: &Arc<Storage>) -> Result<Value, String> {
     let stats = storage.get_stats().map_err(|e| e.to_string())?;
 
     // Determine health status
@@ -5,7 +5,6 @@
 use serde_json::Value;
 use std::sync::Arc;
-use tokio::sync::Mutex;
 use vestige_core::{
     CaptureWindow, ImportanceEvent, ImportanceEventType,
@@ -71,7 +70,7 @@ pub fn stats_schema() -> Value {
 /// Trigger an importance event to retroactively strengthen recent memories
 pub async fn execute_trigger(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     args: Option<Value>,
 ) -> Result<Value, String> {
     let args = args.ok_or("Missing arguments")?;
@@ -88,7 +87,6 @@ pub async fn execute_trigger(
     let hours_back = args["hours_back"].as_f64().unwrap_or(9.0);
     let hours_forward = args["hours_forward"].as_f64().unwrap_or(2.0);
 
-    let storage = storage.lock().await;
 
     // Verify the trigger memory exists
     let trigger_memory = storage.get_node(memory_id)
@@ -158,7 +156,7 @@ pub async fn execute_trigger(
 /// Find memories with active synaptic tags
 pub async fn execute_find(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     args: Option<Value>,
 ) -> Result<Value, String> {
     let args = args.unwrap_or(serde_json::json!({}));
@@ -166,7 +164,6 @@ pub async fn execute_find(
     let min_strength = args["min_strength"].as_f64().unwrap_or(0.3);
     let limit = args["limit"].as_i64().unwrap_or(20) as usize;
 
-    let storage = storage.lock().await;
 
     // Get memories with high retention (proxy for "tagged")
     let memories = storage.get_all_nodes(200, 0)
@@ -196,9 +193,8 @@ pub async fn execute_find(
 /// Get synaptic tagging statistics
 pub async fn execute_stats(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
 ) -> Result<Value, String> {
-    let storage = storage.lock().await;
 
     let memories = storage.get_all_nodes(500, 0)
         .map_err(|e| e.to_string())?;
@@ -8,7 +8,7 @@ use serde::Deserialize;
 use serde_json::Value;
 use std::collections::BTreeMap;
 use std::sync::Arc;
-use tokio::sync::Mutex;
 use vestige_core::Storage;
@@ -89,7 +89,7 @@ fn parse_datetime(s: &str) -> Result<DateTime<Utc>, String> {
 /// Execute memory_timeline tool
 pub async fn execute(
-    storage: &Arc<Mutex<Storage>>,
+    storage: &Arc<Storage>,
     args: Option<Value>,
 ) -> Result<Value, String> {
     let args: TimelineArgs = match args {
@@ -130,7 +130,6 @@ pub async fn execute(
     let limit = args.limit.unwrap_or(50).clamp(1, 200);
 
-    let storage = storage.lock().await;
 
     // Query memories in time range
     let mut results = storage
@@ -189,15 +188,14 @@ mod tests {
     use super::*;
     use tempfile::TempDir;
 
-    async fn test_storage() -> (Arc<Mutex<Storage>>, TempDir) {
+    async fn test_storage() -> (Arc<Storage>, TempDir) {
         let dir = TempDir::new().unwrap();
         let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
-        (Arc::new(Mutex::new(storage)), dir)
+        (Arc::new(storage), dir)
     }
 
-    async fn ingest_test_memory(storage: &Arc<Mutex<Storage>>, content: &str) {
-        let mut s = storage.lock().await;
-        s.ingest(vestige_core::IngestInput {
+    async fn ingest_test_memory(storage: &Arc<Storage>, content: &str) {
+        storage.ingest(vestige_core::IngestInput {
             content: content.to_string(),
             node_type: "fact".to_string(),
             source: None,
@@ -115,7 +115,7 @@ It remembers.
 ## Important: Tool Limit
 
-Windsurf has a **hard cap of 100 tools** across all MCP servers. Vestige uses ~18 tools, leaving plenty of room for other servers.
+Windsurf has a **hard cap of 100 tools** across all MCP servers. Vestige uses 19 tools, leaving plenty of room for other servers.
 
 ---
@@ -51,7 +51,7 @@ Quit Xcode completely (Cmd+Q) and reopen your project.
 ### 4. Verify
 
-Type `/context` in the Agent panel. You should see `vestige` listed with 18 tools.
+Type `/context` in the Agent panel. You should see `vestige` listed with 19 tools.
 
 ---
@@ -254,7 +254,7 @@ echo " Next steps:"
 echo "  1. Restart Xcode (Cmd+Q, then reopen)"
 echo "  2. Open your project"
 echo "  3. Type /context in the Agent panel"
-echo "  4. You should see vestige listed with 18 tools"
+echo "  4. You should see vestige listed with 19 tools"
 echo ""
 echo "  Try it:"
 echo "  \"Remember that this project uses SwiftUI with MVVM architecture\""