mirror of
https://github.com/samvallad33/vestige.git
synced 2026-04-25 00:36:22 +02:00
feat: Vestige v1.7.0 — 18 tools, automation triggers, SQLite perf
Tool consolidation: 23 → 18 tools
- ingest merged into smart_ingest (single + batch mode)
- session_checkpoint merged into smart_ingest batch (items param)
- promote_memory/demote_memory merged into memory(action=promote/demote)
- health_check/stats merged into system_status

Automation triggers in system_status:
- lastDreamTimestamp, savesSinceLastDream, lastBackupTimestamp, lastConsolidationTimestamp — enable Claude to conditionally trigger dream/backup/gc/find_duplicates at session start
- Migration v6: dream_history table (dreams were in-memory only)
- DreamHistoryRecord struct + save/query methods
- Dream persistence in dream.rs (non-fatal on failure)

SQLite performance:
- PRAGMA mmap_size = 256MB (2-5x read speedup)
- PRAGMA journal_size_limit = 64MB (prevents WAL bloat)
- PRAGMA optimize = 0x10002 (fresh query planner stats on connect)
- FTS5 segment merge during consolidation (20-40% keyword boost)
- PRAGMA optimize during consolidation cycle

1,152 tests passing, 0 failures, release build clean.
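The session-start automation rules described in this commit can be sketched as a pure decision function. This is illustrative only: `session_start_triggers` and its parameter names are assumptions for the sketch, not the Vestige API (in practice Claude reads the `automationTriggers` fields from `system_status` output).

```rust
// Sketch of the v1.7.0 automation-trigger rules (hypothetical helper;
// the real logic is a checklist driven by system_status output).
fn session_start_triggers(
    hours_since_dream: Option<i64>,  // None = never dreamed
    saves_since_dream: i64,
    days_since_backup: Option<i64>,  // None = never backed up
    total_memories: i64,
    status: &str,
) -> Vec<&'static str> {
    let mut actions = Vec::new();
    // lastDreamTimestamp null OR >24h ago OR savesSinceLastDream > 50 → dream
    if hours_since_dream.is_none_or(|h| h > 24) || saves_since_dream > 50 {
        actions.push("dream");
    }
    // lastBackupTimestamp null OR >7 days ago → backup
    if days_since_backup.is_none_or(|d| d > 7) {
        actions.push("backup");
    }
    // totalMemories > 700 → find_duplicates
    if total_memories > 700 {
        actions.push("find_duplicates");
    }
    // degraded/critical status → gc dry run
    if status == "degraded" || status == "critical" {
        actions.push("gc(dry_run: true)");
    }
    actions
}

fn main() {
    // A fresh install triggers dream + backup only.
    println!("{:?}", session_start_triggers(None, 0, None, 10, "healthy"));
}
```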
This commit is contained in:
parent
33d8b6b405
commit
c29023dd80
20 changed files with 1478 additions and 168 deletions
26 CHANGELOG.md

@@ -5,6 +5,32 @@ All notable changes to Vestige will be documented in this file.

 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

+## [1.7.0] - 2026-02-20
+
+### Changed
+
+- **Tool consolidation: 23 → 18 tools** — merged redundant tools while maintaining 100% backward compatibility via deprecated redirects
+  - **`ingest` → `smart_ingest`** — `ingest` was a duplicate of `smart_ingest`; now redirects automatically
+  - **`session_checkpoint` → `smart_ingest` batch mode** — new `items` parameter on `smart_ingest` accepts up to 20 items, each running the full cognitive pipeline (importance scoring, intent detection, synaptic tagging, hippocampal indexing). The old `session_checkpoint` skipped the cognitive pipeline.
+  - **`promote_memory` + `demote_memory` → `memory` unified** — new `promote` and `demote` actions on the `memory` tool with optional `reason` parameter and full cognitive feedback pipeline (reward signal, reconsolidation, competition)
+  - **`health_check` + `stats` → `system_status`** — single tool returns combined health status, full statistics, FSRS preview, cognitive module health, state distribution, warnings, and recommendations
+- **CLAUDE.md automation overhaul** — all 18 tools now have explicit auto-trigger rules; session start expanded to 5 steps (added `system_status` + `predict`); full proactive behaviors table
+
+### Added
+
+- `smart_ingest` batch mode with `items` parameter (max 20 items, full cognitive pipeline per item)
+- `memory` actions: `promote` and `demote` with optional `reason` parameter
+- `system_status` tool combining health check + statistics + cognitive health
+- 30 new tests (305 → 335)
+
+### Deprecated (still work via redirects)
+
+- `ingest` → use `smart_ingest`
+- `session_checkpoint` → use `smart_ingest` with `items`
+- `promote_memory` → use `memory(action="promote")`
+- `demote_memory` → use `memory(action="demote")`
+- `health_check` → use `system_status`
+- `stats` → use `system_status`
+
+---
+
 ## [1.6.0] - 2026-02-19

 ### Changed
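The changelog above caps a `smart_ingest` batch call at 20 items. The constraint can be sketched as a small validation rule; `IngestItem` and `validate_batch` here are hypothetical illustrations, not the real crate API.

```rust
// Hypothetical mirror of the smart_ingest batch constraint (max 20 items per call).
#[derive(Debug)]
struct IngestItem {
    content: String,
    tags: Vec<String>,
}

fn validate_batch(items: &[IngestItem]) -> Result<(), String> {
    if items.is_empty() {
        return Err("batch mode requires at least one item".to_string());
    }
    if items.len() > 20 {
        return Err(format!("batch limited to 20 items, got {}", items.len()));
    }
    Ok(())
}

fn main() {
    let items: Vec<IngestItem> = (0..21)
        .map(|i| IngestItem {
            content: format!("memory {i}"),
            tags: vec!["session-end".to_string()],
        })
        .collect();
    println!("{:?}", validate_batch(&items));       // over the cap
    println!("{:?}", validate_batch(&items[..20])); // at the cap
}
```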
77 CLAUDE.md

@@ -1,4 +1,4 @@

-# Vestige v1.5.0 — Cognitive Memory System
+# Vestige v1.7.0 — Cognitive Memory System

 Vestige is your long-term memory. It implements real neuroscience: FSRS-6 spaced repetition, synaptic tagging, prediction error gating, hippocampal indexing, spreading activation, and 28 stateful cognitive modules. **Use it automatically.**
@@ -12,70 +12,77 @@ Every conversation, before responding to the user:

 1. search("user preferences instructions") → recall who the user is
 2. search("[current project] context") → recall project patterns/decisions
 3. intention → check (with current context) → check for triggered reminders
-4. codebase → get_context (if coding) → load patterns and decisions
+4. system_status → get system health + stats
+5. predict → predict needed memories → proactive retrieval for context
+6. Check automationTriggers from system_status:
+   - lastDreamTimestamp null OR >24h ago OR savesSinceLastDream > 50 → call dream
+   - lastBackupTimestamp null OR >7 days ago → call backup
+   - totalMemories > 700 → call find_duplicates
+   - status == "degraded" or "critical" → call gc(dry_run: true)
 ```

 Say "Remembering..." then retrieve context before answering.

 ---

-## The 23 Tools
+## The 18 Tools

-### Core Memory (2 tools)
+### Core Memory (1 tool)
 | Tool | When to Use |
 |------|-------------|
-| `ingest` | Store facts, concepts, events. Raw insertion, no dedup. |
-| `smart_ingest` | **Default for all saves.** Uses Prediction Error Gating to auto-decide: create, update, reinforce, or supersede. Runs cognitive pipeline (4-channel importance scoring, intent detection, synaptic tagging, hippocampal indexing). |
+| `smart_ingest` | **Default for all saves.** Single mode: provide `content` for auto-decide CREATE/UPDATE/SUPERSEDE via Prediction Error Gating. Batch mode: provide `items` array (max 20) for session-end saves — each item runs the full cognitive pipeline (importance scoring, intent detection, synaptic tagging, hippocampal indexing). |

 ### Unified Tools (4 tools)
 | Tool | Actions | When to Use |
 |------|---------|-------------|
-| `search` | query + filters | **Every time you need to recall anything.** Hybrid search (BM25 + semantic + RRF fusion). 7-stage pipeline: overfetch → rerank → temporal boost → accessibility filter → context match → competition → spreading activation. Searching strengthens memory (Testing Effect). |
-| `memory` | get, delete, state | Retrieve a full memory by ID, delete a memory, or check its cognitive state (Active/Dormant/Silent/Unavailable). |
+| `search` | query + filters | **Every time you need to recall anything.** Hybrid search (BM25 + semantic + convex combination fusion). 7-stage pipeline: overfetch → rerank → temporal boost → accessibility filter → context match → competition → spreading activation. Searching strengthens memory (Testing Effect). |
+| `memory` | get, delete, state, promote, demote | Retrieve a full memory by ID, delete a memory, check its cognitive state (Active/Dormant/Silent/Unavailable), promote (thumbs up — increases retrieval strength), or demote (thumbs down — decreases retrieval strength, does NOT delete). |
 | `codebase` | remember_pattern, remember_decision, get_context | Store and recall code patterns, architectural decisions, and project context. The killer differentiator. |
 | `intention` | set, check, update, list | Prospective memory — "remember to do X when Y happens". Supports time, context, and event triggers. |

-### Feedback (2 tools)
-| Tool | When to Use |
-|------|-------------|
-| `promote_memory` | User confirms a memory was helpful or correct. Increases retrieval strength + triggers reward signal + reconsolidation. |
-| `demote_memory` | User says a memory was wrong or unhelpful. Decreases retrieval strength + updates competition model. Does NOT delete. |
-
 ### Temporal (2 tools)
 | Tool | When to Use |
 |------|-------------|
-| `memory_timeline` | Browse memories chronologically. Grouped by day. Filter by type, tags, date range. Detail levels: brief/summary/full. |
-| `memory_changelog` | Audit trail. Per-memory: state transitions. System-wide: consolidations + recent changes. |
+| `memory_timeline` | Browse memories chronologically. Grouped by day. Filter by type, tags, date range. When the user references a time period ("last week", "yesterday"). |
+| `memory_changelog` | Audit trail. Per-memory: state transitions. System-wide: consolidations + recent changes. When debugging memory issues. |

 ### Cognitive (3 tools) — v1.5.0
 | Tool | When to Use |
 |------|-------------|
-| `dream` | Trigger memory consolidation — replays recent memories to discover hidden connections and synthesize insights. Like sleep for AI. |
-| `explore_connections` | Graph exploration. Actions: `chain` (reasoning path A→B), `associations` (spreading activation from a node), `bridges` (connecting memories between two nodes). |
-| `predict` | Proactive retrieval — predicts what memories you'll need next based on context, activity patterns, and learned behavior. |
+| `dream` | Trigger memory consolidation — replays recent memories to discover hidden connections and synthesize insights. At session start if >24h since last dream, and after every 50 saves. |
+| `explore_connections` | Graph exploration. Actions: `chain` (reasoning path A→B), `associations` (spreading activation from a node), `bridges` (connecting memories between two nodes). When search returns 3+ related results. |
+| `predict` | Proactive retrieval — predicts what memories you'll need next based on context, activity patterns, and learned behavior. At session start, and when switching projects. |

-### Auto-Save & Dedup (3 tools)
+### Auto-Save & Dedup (2 tools)
 | Tool | When to Use |
 |------|-------------|
 | `importance_score` | Score content importance before deciding whether to save. 4-channel model: novelty, arousal, reward, attention. Composite > 0.6 = worth saving. |
-| `session_checkpoint` | **Batch save up to 20 items in one call.** Each routes through Prediction Error Gating. Use at session end or before context compaction. |
 | `find_duplicates` | Find near-duplicate memory clusters via cosine similarity. Returns merge/review suggestions. Run when memory count > 700 or on user request. |

-### Maintenance (6 tools)
+### Maintenance (5 tools)
 | Tool | When to Use |
 |------|-------------|
-| `health_check` | System status: healthy/degraded/critical/empty. Actionable recommendations. |
-| `consolidate` | Run FSRS-6 consolidation cycle. Applies decay, generates embeddings, maintenance. Use when memories seem stale. |
-| `stats` | Full statistics: total count, retention distribution, embedding coverage, cognitive state breakdown. |
-| `backup` | Create SQLite database backup. Returns file path. |
+| `system_status` | **Combined health + stats.** Returns status (healthy/degraded/critical/empty), full statistics, FSRS preview, cognitive module health, state distribution, warnings, and recommendations. At session start. |
+| `consolidate` | Run FSRS-6 consolidation cycle. Applies decay, generates embeddings, maintenance. At session end, or when retention drops. |
+| `backup` | Create SQLite database backup. Before major upgrades, and weekly. |
 | `export` | Export memories as JSON/JSONL with tag and date filters. |
-| `gc` | Garbage collect low-retention memories. Defaults to dry_run=true for safety. |
+| `gc` | Garbage collect low-retention memories. When system_status shows degraded + high count. Defaults to dry_run=true. |

 ### Restore (1 tool)
 | Tool | When to Use |
 |------|-------------|
 | `restore` | Restore memories from a JSON backup file. Supports MCP wrapper, RecallResult, and direct array formats. |

+### Deprecated (still work via redirects)
+| Old Tool | Redirects To |
+|----------|-------------|
+| `ingest` | `smart_ingest` |
+| `session_checkpoint` | `smart_ingest` (batch mode) |
+| `promote_memory` | `memory(action="promote")` |
+| `demote_memory` | `memory(action="demote")` |
+| `health_check` | `system_status` |
+| `stats` | `system_status` |

 ---

 ## Mandatory Save Gates
@@ -111,7 +118,7 @@ codebase({

 ### SESSION_END — Before stopping or compaction
 ```
-session_checkpoint({
+smart_ingest({
   items: [
     { content: "SESSION: [work done]\nFixes: [list]\nDecisions: [list]", tags: ["session-end", "[project]"] },
     // ... any unsaved fixes, decisions, patterns
@@ -127,7 +134,7 @@ session_checkpoint({

 |-----------|--------|
 | "Remember this" / "Don't forget" | `smart_ingest` immediately |
 | "I always..." / "I never..." / "I prefer..." | Save as preference |
-| "This is important" | `smart_ingest` + `promote_memory` |
+| "This is important" | `smart_ingest` + `memory(action="promote")` |
 | "Remind me..." / "Next time..." | `intention` → set |

 ---
@@ -136,7 +143,7 @@ session_checkpoint({

 ### Search Pipeline (7 stages)
 1. **Overfetch** — Pull 3x results from hybrid search (BM25 + semantic)
-2. **Reranker** — Re-score by relevance quality
+2. **Reranker** — Re-score by relevance quality (cross-encoder)
 3. **Temporal boost** — Recent memories get recency bonus
 4. **Accessibility filter** — FSRS-6 retention threshold (Ebbinghaus curve)
 5. **Context match** — Tulving 1973 encoding specificity (match current context to encoding context)
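The CHANGELOG and tool descriptions in this commit switch the fusion wording from RRF to convex combination. A small score-fusion sketch of that assumed form (min-max normalization plus a fixed keyword weight `alpha`; the actual normalization and weighting inside Vestige are not shown in this diff):

```rust
// Convex combination fusion sketch: fused = alpha * keyword + (1 - alpha) * semantic,
// with each score list min-max normalized to [0, 1] first. Illustrative only.
fn min_max_normalize(scores: &[f64]) -> Vec<f64> {
    let min = scores.iter().cloned().fold(f64::INFINITY, f64::min);
    let max = scores.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    if max > min {
        scores.iter().map(|&s| (s - min) / (max - min)).collect()
    } else {
        // All scores equal (or empty): nothing to discriminate on.
        vec![0.0; scores.len()]
    }
}

fn fuse(keyword: &[f64], semantic: &[f64], alpha: f64) -> Vec<f64> {
    let (k, s) = (min_max_normalize(keyword), min_max_normalize(semantic));
    k.iter()
        .zip(s.iter())
        .map(|(&k, &s)| alpha * k + (1.0 - alpha) * s)
        .collect()
}

fn main() {
    // BM25 favors doc 1; the embedding favors doc 0; equal weighting balances them.
    println!("{:?}", fuse(&[0.0, 10.0], &[5.0, 0.0], 0.5));
}
```

Unlike RRF, which fuses ranks, a convex combination fuses the scores themselves, so normalization of the two score distributions matters.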
@@ -148,7 +155,7 @@ session_checkpoint({

 **Storage:** Prediction Error Gating decides create/update/reinforce/supersede
 **Post-ingest:** Synaptic tagging (Frey & Morris 1997) + novelty model update + hippocampal indexing + cross-project recording

-### Feedback Pipeline
+### Feedback Pipeline (via memory promote/demote)
 **Promote:** Reward signal + importance boost + reconsolidation (memory becomes modifiable for 24-48h) + activation spread
 **Demote:** Competition suppression + retrieval strength decrease (does NOT delete — alternatives surface instead)
@@ -169,12 +176,12 @@ All modules persist across tool calls as stateful instances:

 ## Memory Hygiene

 ### Promote when:
-- User confirms memory was helpful
+- User confirms memory was helpful → `memory(action="promote")`
 - Solution worked correctly
 - Information was accurate

 ### Demote when:
-- User corrects a mistake
+- User corrects a mistake → `memory(action="demote")`
 - Information was wrong
 - Memory led to bad outcome
@@ -195,8 +202,8 @@ Memory is retrieval. Searching strengthens memory. Search liberally, save aggressively.

 ## Development

-- **Crate:** `vestige-mcp` v1.5.0, Rust 2024 edition
-- **Tests:** 305 tests, zero warnings (`cargo test -p vestige-mcp`)
+- **Crate:** `vestige-mcp` v1.7.0, Rust 2024 edition
+- **Tests:** 335 tests, zero warnings (`cargo test -p vestige-mcp`)
 - **Build:** `cargo build --release -p vestige-mcp`
 - **Features:** `embeddings` + `vector-search` (default on)
 - **Architecture:** `McpServer` holds `Arc<Mutex<Storage>>` + `Arc<Mutex<CognitiveEngine>>`
4 Cargo.lock (generated)

@@ -3655,7 +3655,7 @@ checksum = "0b928f33d975fc6ad9f86c8f283853ad26bdd5b10b7f1542aa2fa15e2289105a"

 [[package]]
 name = "vestige-core"
-version = "1.6.0"
+version = "1.7.0"
 dependencies = [
  "chrono",
  "directories",

@@ -3689,7 +3689,7 @@ dependencies = [

 [[package]]
 name = "vestige-mcp"
-version = "1.6.0"
+version = "1.7.0"
 dependencies = [
  "anyhow",
  "axum",
@@ -10,7 +10,7 @@ exclude = [

 ]

 [workspace.package]
-version = "1.6.0"
+version = "1.7.0"
 edition = "2024"
 license = "AGPL-3.0-only"
 repository = "https://github.com/samvallad33/vestige"
@@ -1,6 +1,6 @@

 [package]
 name = "vestige-core"
-version = "1.6.0"
+version = "1.7.0"
 edition = "2024"
 rust-version = "1.85"
 authors = ["Vestige Team"]
@@ -138,8 +138,8 @@ pub use fsrs::{

 // Storage layer
 pub use storage::{
-    ConsolidationHistoryRecord, InsightRecord, IntentionRecord, Result, SmartIngestResult,
-    StateTransitionRecord, Storage, StorageError,
+    ConsolidationHistoryRecord, DreamHistoryRecord, InsightRecord, IntentionRecord, Result,
+    SmartIngestResult, StateTransitionRecord, Storage, StorageError,
 };

 // Consolidation (sleep-inspired memory processing)
@@ -29,6 +29,11 @@ pub const MIGRATIONS: &[Migration] = &[

         description: "FSRS-6 upgrade: access history, ACT-R activation, personalized decay",
         up: MIGRATION_V5_UP,
     },
+    Migration {
+        version: 6,
+        description: "Dream history persistence for automation triggers",
+        up: MIGRATION_V6_UP,
+    },
 ];

 /// A database migration
@@ -447,6 +452,26 @@ ALTER TABLE consolidation_history ADD COLUMN w20_optimized REAL;

 UPDATE schema_version SET version = 5, applied_at = datetime('now');
 "#;

+/// V6: Dream history persistence for automation triggers.
+/// Dreams were in-memory only (MemoryDreamer.dream_history Vec<DreamResult> lost on restart).
+/// This table persists dream metadata so system_status can report when the last dream ran.
+const MIGRATION_V6_UP: &str = r#"
+CREATE TABLE IF NOT EXISTS dream_history (
+    id INTEGER PRIMARY KEY AUTOINCREMENT,
+    dreamed_at TEXT NOT NULL,
+    duration_ms INTEGER NOT NULL DEFAULT 0,
+    memories_replayed INTEGER NOT NULL DEFAULT 0,
+    connections_found INTEGER NOT NULL DEFAULT 0,
+    insights_generated INTEGER NOT NULL DEFAULT 0,
+    memories_strengthened INTEGER NOT NULL DEFAULT 0,
+    memories_compressed INTEGER NOT NULL DEFAULT 0
+);
+
+CREATE INDEX IF NOT EXISTS idx_dream_history_dreamed_at ON dream_history(dreamed_at);
+
+UPDATE schema_version SET version = 6, applied_at = datetime('now');
+"#;

 /// Get current schema version from database
 pub fn get_current_version(conn: &rusqlite::Connection) -> rusqlite::Result<u32> {
     conn.query_row(
@@ -11,6 +11,6 @@ mod sqlite;

 pub use migrations::MIGRATIONS;
 pub use sqlite::{
-    ConsolidationHistoryRecord, InsightRecord, IntentionRecord, Result, SmartIngestResult,
-    StateTransitionRecord, Storage, StorageError,
+    ConsolidationHistoryRecord, DreamHistoryRecord, InsightRecord, IntentionRecord, Result,
+    SmartIngestResult, StateTransitionRecord, Storage, StorageError,
 };
@@ -140,7 +140,10 @@ impl Storage {

             PRAGMA cache_size = -64000;
             PRAGMA temp_store = MEMORY;
             PRAGMA foreign_keys = ON;
-            PRAGMA busy_timeout = 5000;",
+            PRAGMA busy_timeout = 5000;
+            PRAGMA mmap_size = 268435456;
+            PRAGMA journal_size_limit = 67108864;
+            PRAGMA optimize = 0x10002;",
         )?;

         #[cfg(feature = "embeddings")]
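The two size PRAGMAs added above are raw byte counts: the commit message's "256MB" and "64MB" are exact MiB multiples. A quick arithmetic check (illustrative, std-only):

```rust
// The mmap_size and journal_size_limit PRAGMA values are exact MiB multiples.
fn mib(n: u64) -> u64 {
    n * 1024 * 1024
}

fn main() {
    println!("PRAGMA mmap_size = {}  (256 MiB)", mib(256));
    println!("PRAGMA journal_size_limit = {}  (64 MiB)", mib(64));
}
```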
@@ -1851,6 +1854,14 @@ impl Storage {

         // 15. Connection Graph Maintenance (decay + prune weak connections)
         let _connections_pruned = self.prune_weak_connections(0.05).unwrap_or(0) as i64;

+        // 16. FTS5 index optimization — merge segments for faster keyword search
+        let _ = self.conn.execute_batch(
+            "INSERT INTO knowledge_fts(knowledge_fts) VALUES('optimize');"
+        );
+
+        // 17. Run PRAGMA optimize to refresh query planner statistics
+        let _ = self.conn.execute_batch("PRAGMA optimize;");

         let duration = start.elapsed().as_millis() as i64;

         // Record consolidation history (bug fix: was never recorded before v1.4.0)
@@ -2310,6 +2321,18 @@ pub struct ConsolidationHistoryRecord {

     pub insights_generated: i32,
 }

+/// Dream history record — persists dream metadata for automation triggers
+#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
+pub struct DreamHistoryRecord {
+    pub dreamed_at: DateTime<Utc>,
+    pub duration_ms: i64,
+    pub memories_replayed: i32,
+    pub connections_found: i32,
+    pub insights_generated: i32,
+    pub memories_strengthened: i32,
+    pub memories_compressed: i32,
+}

 impl Storage {
     // ========================================================================
     // INTENTIONS PERSISTENCE
@@ -2849,6 +2872,84 @@ impl Storage {

         Ok(result)
     }

+    // ========================================================================
+    // DREAM HISTORY PERSISTENCE
+    // ========================================================================
+
+    /// Save a dream history record
+    pub fn save_dream_history(&mut self, record: &DreamHistoryRecord) -> Result<i64> {
+        self.conn.execute(
+            "INSERT INTO dream_history (
+                dreamed_at, duration_ms, memories_replayed, connections_found,
+                insights_generated, memories_strengthened, memories_compressed
+            ) VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7)",
+            params![
+                record.dreamed_at.to_rfc3339(),
+                record.duration_ms,
+                record.memories_replayed,
+                record.connections_found,
+                record.insights_generated,
+                record.memories_strengthened,
+                record.memories_compressed,
+            ],
+        )?;
+        Ok(self.conn.last_insert_rowid())
+    }
+
+    /// Get last dream timestamp
+    pub fn get_last_dream(&self) -> Result<Option<DateTime<Utc>>> {
+        let result: Option<String> = self.conn.query_row(
+            "SELECT MAX(dreamed_at) FROM dream_history",
+            [],
+            |row| row.get(0),
+        ).ok().flatten();
+
+        Ok(result.and_then(|s| {
+            DateTime::parse_from_rfc3339(&s).ok().map(|dt| dt.with_timezone(&Utc))
+        }))
+    }
+
+    /// Count memories created since a given timestamp
+    pub fn count_memories_since(&self, since: DateTime<Utc>) -> Result<i64> {
+        let count: i64 = self.conn.query_row(
+            "SELECT COUNT(*) FROM knowledge_nodes WHERE created_at >= ?1",
+            params![since.to_rfc3339()],
+            |row| row.get(0),
+        )?;
+        Ok(count)
+    }
+
+    /// Get last backup timestamp by scanning the backups directory.
+    /// Parses `vestige-YYYYMMDD-HHMMSS.db` filenames.
+    pub fn get_last_backup_timestamp() -> Option<DateTime<Utc>> {
+        let proj_dirs = directories::ProjectDirs::from("com", "vestige", "core")?;
+        let backup_dir = proj_dirs.data_dir().parent()?.join("backups");
+
+        if !backup_dir.exists() {
+            return None;
+        }
+
+        let mut latest: Option<DateTime<Utc>> = None;
+
+        if let Ok(entries) = std::fs::read_dir(&backup_dir) {
+            for entry in entries.flatten() {
+                let name = entry.file_name();
+                let name_str = name.to_string_lossy();
+                // Parse vestige-YYYYMMDD-HHMMSS.db
+                if let Some(ts_part) = name_str.strip_prefix("vestige-").and_then(|s| s.strip_suffix(".db")) {
+                    if let Ok(naive) = chrono::NaiveDateTime::parse_from_str(ts_part, "%Y%m%d-%H%M%S") {
+                        let dt = naive.and_utc();
+                        if latest.as_ref().is_none_or(|l| dt > *l) {
+                            latest = Some(dt);
+                        }
+                    }
+                }
+            }
+        }
+
+        latest
+    }

     // ========================================================================
     // STATE TRANSITIONS (Audit Trail)
     // ========================================================================
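`get_last_backup_timestamp` above leans on chrono for the timestamp parse, but the `vestige-YYYYMMDD-HHMMSS.db` filename convention itself can be split with std alone. A std-only sketch (`parse_backup_name` is illustrative, not part of the crate):

```rust
// Std-only sketch of parsing the vestige-YYYYMMDD-HHMMSS.db backup filename
// convention into (year, month, day, hour, minute, second). Note: unlike the
// chrono parse in the real code, this does not validate calendar ranges.
fn parse_backup_name(name: &str) -> Option<(u32, u32, u32, u32, u32, u32)> {
    let ts = name.strip_prefix("vestige-")?.strip_suffix(".db")?;
    let (date, time) = ts.split_once('-')?;
    if date.len() != 8 || time.len() != 6 {
        return None;
    }
    let num = |s: &str| s.parse::<u32>().ok();
    Some((
        num(&date[0..4])?, num(&date[4..6])?, num(&date[6..8])?,
        num(&time[0..2])?, num(&time[2..4])?, num(&time[4..6])?,
    ))
}

fn main() {
    println!("{:?}", parse_backup_name("vestige-20260220-153000.db"));
}
```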
@@ -3009,4 +3110,63 @@ mod tests {

         assert!(deleted);
         assert!(storage.get_node(&node.id).unwrap().is_none());
     }

+    #[test]
+    fn test_dream_history_save_and_get_last() {
+        let mut storage = create_test_storage();
+        let now = Utc::now();
+
+        let record = DreamHistoryRecord {
+            dreamed_at: now,
+            duration_ms: 1500,
+            memories_replayed: 50,
+            connections_found: 12,
+            insights_generated: 3,
+            memories_strengthened: 8,
+            memories_compressed: 2,
+        };
+
+        let id = storage.save_dream_history(&record).unwrap();
+        assert!(id > 0);
+
+        let last = storage.get_last_dream().unwrap();
+        assert!(last.is_some());
+        // Timestamps should be within 1 second (RFC3339 round-trip)
+        let diff = (last.unwrap() - now).num_seconds().abs();
+        assert!(diff <= 1, "Timestamp mismatch: diff={}s", diff);
+    }
+
+    #[test]
+    fn test_dream_history_empty() {
+        let storage = create_test_storage();
+        let last = storage.get_last_dream().unwrap();
+        assert!(last.is_none());
+    }
+
+    #[test]
+    fn test_count_memories_since() {
+        let mut storage = create_test_storage();
+        let before = Utc::now() - Duration::seconds(10);
+
+        for i in 0..5 {
+            storage.ingest(IngestInput {
+                content: format!("Count test memory {}", i),
+                node_type: "fact".to_string(),
+                ..Default::default()
+            }).unwrap();
+        }
+
+        let count = storage.count_memories_since(before).unwrap();
+        assert_eq!(count, 5);
+
+        let future = Utc::now() + Duration::hours(1);
+        let count_future = storage.count_memories_since(future).unwrap();
+        assert_eq!(count_future, 0);
+    }
+
+    #[test]
+    fn test_get_last_backup_timestamp_no_panic() {
+        // Static method should not panic even if no backups exist
+        let _ = Storage::get_last_backup_timestamp();
+    }
 }
@@ -1,6 +1,6 @@

 [package]
 name = "vestige-mcp"
-version = "1.6.0"
+version = "1.7.0"
 edition = "2024"
 description = "Cognitive memory MCP server for Claude - FSRS-6, spreading activation, synaptic tagging, and 130 years of memory research"
 authors = ["samvallad33"]
@@ -121,8 +121,8 @@ impl McpServer {

         recall past knowledge, and maintain context across sessions. The system uses \
         FSRS-6 spaced repetition to naturally decay memories over time. \
         \n\nFeedback Protocol: If the user explicitly confirms a memory was helpful, use \
-        promote_memory. If they correct a hallucination or say a memory was wrong, use \
-        demote_memory. Do not ask for permission - just act on their feedback.".to_string()
+        memory(action='promote'). If they correct a hallucination or say a memory was wrong, use \
+        memory(action='demote'). Do not ask for permission - just act on their feedback.".to_string()
     ),
 };
@@ -131,20 +131,19 @@ impl McpServer {

     /// Handle tools/list request
     async fn handle_tools_list(&self) -> Result<serde_json::Value, JsonRpcError> {
-        // v1.1: Only expose 8 core tools. Deprecated tools still work internally
-        // for backward compatibility but are not listed.
+        // v1.7: 18 tools. Deprecated tools still work via redirects in handle_tools_call.
         let tools = vec![
             // ================================================================
             // UNIFIED TOOLS (v1.1+)
             // ================================================================
             ToolDescription {
                 name: "search".to_string(),
-                description: Some("Unified search tool. Uses hybrid search (keyword + semantic + RRF fusion) internally. Auto-strengthens memories on access (Testing Effect).".to_string()),
+                description: Some("Unified search tool. Uses hybrid search (keyword + semantic + convex combination fusion) internally. Auto-strengthens memories on access (Testing Effect).".to_string()),
                 input_schema: tools::search_unified::schema(),
             },
             ToolDescription {
                 name: "memory".to_string(),
-                description: Some("Unified memory management tool. Actions: 'get' (retrieve full node), 'delete' (remove memory), 'state' (get accessibility state).".to_string()),
+                description: Some("Unified memory management tool. Actions: 'get' (retrieve full node), 'delete' (remove memory), 'state' (get accessibility state), 'promote' (thumbs up — increases retrieval strength), 'demote' (thumbs down — decreases retrieval strength, does NOT delete).".to_string()),
                 input_schema: tools::memory_unified::schema(),
             },
             ToolDescription {
@@ -158,32 +157,14 @@ impl McpServer {

                 input_schema: tools::intention_unified::schema(),
             },
             // ================================================================
-            // Core memory tools
+            // CORE MEMORY (v1.7: smart_ingest absorbs ingest + checkpoint)
             // ================================================================
-            ToolDescription {
-                name: "ingest".to_string(),
-                description: Some("Add new knowledge to memory. Use for facts, concepts, decisions, or any information worth remembering.".to_string()),
-                input_schema: tools::ingest::schema(),
-            },
             ToolDescription {
                 name: "smart_ingest".to_string(),
-                description: Some("INTELLIGENT memory ingestion with Prediction Error Gating. Automatically decides whether to CREATE new, UPDATE existing, or SUPERSEDE outdated memories based on semantic similarity. Solves the 'bad vs good similar memory' problem.".to_string()),
+                description: Some("INTELLIGENT memory ingestion with Prediction Error Gating. Single mode: provide 'content' to auto-decide CREATE/UPDATE/SUPERSEDE. Batch mode: provide 'items' array (max 20) for session-end saves — each item runs the full cognitive pipeline (importance scoring, intent detection, synaptic tagging).".to_string()),
                 input_schema: tools::smart_ingest::schema(),
             },
-            // ================================================================
-            // Feedback tools
-            // ================================================================
-            ToolDescription {
-                name: "promote_memory".to_string(),
-                description: Some("Promote a memory (thumbs up). Use when a memory led to a good outcome. Increases retrieval strength so it surfaces more often.".to_string()),
-                input_schema: tools::feedback::promote_schema(),
-            },
-            ToolDescription {
-                name: "demote_memory".to_string(),
-                description: Some("Demote a memory (thumbs down). Use when a memory led to a bad outcome or was wrong. Decreases retrieval strength so better alternatives surface. Does NOT delete.".to_string()),
-                input_schema: tools::feedback::demote_schema(),
-            },
             // ================================================================
             // TEMPORAL TOOLS (v1.2+)
             // ================================================================
             ToolDescription {
@@ -197,23 +178,18 @@ impl McpServer {

                 input_schema: tools::changelog::schema(),
             },
             // ================================================================
-            // MAINTENANCE TOOLS (v1.2+)
+            // MAINTENANCE TOOLS (v1.7: system_status replaces health_check + stats)
             // ================================================================
             ToolDescription {
-                name: "health_check".to_string(),
-                description: Some("System health status with warnings and recommendations. Returns status (healthy/degraded/critical/empty), stats, and actionable advice.".to_string()),
-                input_schema: tools::maintenance::health_check_schema(),
+                name: "system_status".to_string(),
+                description: Some("Combined system health and statistics. Returns status (healthy/degraded/critical/empty), full stats, FSRS preview, cognitive module health, state distribution, warnings, and recommendations.".to_string()),
+                input_schema: tools::maintenance::system_status_schema(),
             },
             ToolDescription {
                 name: "consolidate".to_string(),
                 description: Some("Run FSRS-6 memory consolidation cycle. Applies decay, generates embeddings, and performs maintenance. Use when memories seem stale.".to_string()),
                 input_schema: tools::maintenance::consolidate_schema(),
             },
-            ToolDescription {
-                name: "stats".to_string(),
-                description: Some("Full memory system statistics including total count, retention distribution, embedding coverage, and cognitive state breakdown.".to_string()),
-                input_schema: tools::maintenance::stats_schema(),
-            },
             ToolDescription {
                 name: "backup".to_string(),
                 description: Some("Create a SQLite database backup. Returns the backup file path.".to_string()),
@ -237,11 +213,6 @@ impl McpServer {
|
|||
    description: Some("Score content importance using 4-channel neuroscience model (novelty/arousal/reward/attention). Returns composite score, channel breakdown, encoding boost, and explanations.".to_string()),
    input_schema: tools::importance::schema(),
},
ToolDescription {
    name: "session_checkpoint".to_string(),
    description: Some("Batch save up to 20 items in one call. Each item routes through Prediction Error Gating (smart_ingest). Use at session end or before context compaction to save all unsaved work.".to_string()),
    input_schema: tools::checkpoint::schema(),
},
ToolDescription {
    name: "find_duplicates".to_string(),
    description: Some("Find duplicate and near-duplicate memory clusters using cosine similarity on embeddings. Returns clusters with suggested actions (merge/review). Use to clean up redundant memories.".to_string()),

@@ -300,15 +271,80 @@ impl McpServer {
// UNIFIED TOOLS (v1.1+) - Preferred API
// ================================================================
"search" => tools::search_unified::execute(&self.storage, &self.cognitive, request.arguments).await,
"memory" => tools::memory_unified::execute(&self.storage, request.arguments).await,
"memory" => tools::memory_unified::execute(&self.storage, &self.cognitive, request.arguments).await,
"codebase" => tools::codebase_unified::execute(&self.storage, &self.cognitive, request.arguments).await,
"intention" => tools::intention_unified::execute(&self.storage, &self.cognitive, request.arguments).await,

// ================================================================
// Core memory tools
// Core memory (v1.7: smart_ingest absorbs ingest + checkpoint)
// ================================================================
"ingest" => tools::ingest::execute(&self.storage, &self.cognitive, request.arguments).await,
"smart_ingest" => tools::smart_ingest::execute(&self.storage, &self.cognitive, request.arguments).await,

// ================================================================
// DEPRECATED (v1.7): ingest → smart_ingest
// ================================================================
"ingest" => {
    warn!("Tool 'ingest' is deprecated in v1.7. Use 'smart_ingest' instead.");
    tools::smart_ingest::execute(&self.storage, &self.cognitive, request.arguments).await
}

// ================================================================
// DEPRECATED (v1.7): session_checkpoint → smart_ingest (batch mode)
// ================================================================
"session_checkpoint" => {
    warn!("Tool 'session_checkpoint' is deprecated in v1.7. Use 'smart_ingest' with 'items' parameter instead.");
    tools::smart_ingest::execute(&self.storage, &self.cognitive, request.arguments).await
}

// ================================================================
// DEPRECATED (v1.7): promote_memory → memory(action='promote')
// ================================================================
"promote_memory" => {
    warn!("Tool 'promote_memory' is deprecated in v1.7. Use 'memory' with action='promote' instead.");
    let unified_args = match request.arguments {
        Some(ref args) => {
            let mut new_args = args.clone();
            if let Some(obj) = new_args.as_object_mut() {
                obj.insert("action".to_string(), serde_json::json!("promote"));
            }
            Some(new_args)
        }
        None => Some(serde_json::json!({"action": "promote"})),
    };
    tools::memory_unified::execute(&self.storage, &self.cognitive, unified_args).await
}
"demote_memory" => {
    warn!("Tool 'demote_memory' is deprecated in v1.7. Use 'memory' with action='demote' instead.");
    let unified_args = match request.arguments {
        Some(ref args) => {
            let mut new_args = args.clone();
            if let Some(obj) = new_args.as_object_mut() {
                obj.insert("action".to_string(), serde_json::json!("demote"));
            }
            Some(new_args)
        }
        None => Some(serde_json::json!({"action": "demote"})),
    };
    tools::memory_unified::execute(&self.storage, &self.cognitive, unified_args).await
}

// ================================================================
// DEPRECATED (v1.7): health_check, stats → system_status
// ================================================================
"health_check" => {
    warn!("Tool 'health_check' is deprecated in v1.7. Use 'system_status' instead.");
    tools::maintenance::execute_system_status(&self.storage, &self.cognitive, request.arguments).await
}
"stats" => {
    warn!("Tool 'stats' is deprecated in v1.7. Use 'system_status' instead.");
    tools::maintenance::execute_system_status(&self.storage, &self.cognitive, request.arguments).await
}

// ================================================================
// SYSTEM STATUS (v1.7: replaces health_check + stats)
// ================================================================
"system_status" => tools::maintenance::execute_system_status(&self.storage, &self.cognitive, request.arguments).await,

"mark_reviewed" => tools::review::execute(&self.storage, request.arguments).await,

// ================================================================

@@ -324,7 +360,6 @@ impl McpServer {
// ================================================================
"get_knowledge" => {
    warn!("Tool 'get_knowledge' is deprecated. Use 'memory' with action='get' instead.");
    // Transform arguments to unified format
    let unified_args = match request.arguments {
        Some(ref args) => {
            let id = args.get("id").cloned().unwrap_or(serde_json::Value::Null);

@@ -335,11 +370,10 @@ impl McpServer {
        }
        None => None,
    };
    tools::memory_unified::execute(&self.storage, unified_args).await
    tools::memory_unified::execute(&self.storage, &self.cognitive, unified_args).await
}
"delete_knowledge" => {
    warn!("Tool 'delete_knowledge' is deprecated. Use 'memory' with action='delete' instead.");
    // Transform arguments to unified format
    let unified_args = match request.arguments {
        Some(ref args) => {
            let id = args.get("id").cloned().unwrap_or(serde_json::Value::Null);

@@ -350,11 +384,10 @@ impl McpServer {
        }
        None => None,
    };
    tools::memory_unified::execute(&self.storage, unified_args).await
    tools::memory_unified::execute(&self.storage, &self.cognitive, unified_args).await
}
"get_memory_state" => {
    warn!("Tool 'get_memory_state' is deprecated. Use 'memory' with action='state' instead.");
    // Transform arguments to unified format
    let unified_args = match request.arguments {
        Some(ref args) => {
            let id = args.get("memory_id").cloned().unwrap_or(serde_json::Value::Null);

@@ -365,7 +398,7 @@ impl McpServer {
        }
        None => None,
    };
    tools::memory_unified::execute(&self.storage, unified_args).await
    tools::memory_unified::execute(&self.storage, &self.cognitive, unified_args).await
}

// ================================================================

@@ -373,7 +406,6 @@ impl McpServer {
// ================================================================
"remember_pattern" => {
    warn!("Tool 'remember_pattern' is deprecated. Use 'codebase' with action='remember_pattern' instead.");
    // Transform arguments to unified format
    let unified_args = match request.arguments {
        Some(ref args) => {
            let mut new_args = args.clone();

@@ -388,7 +420,6 @@ impl McpServer {
}
"remember_decision" => {
    warn!("Tool 'remember_decision' is deprecated. Use 'codebase' with action='remember_decision' instead.");
    // Transform arguments to unified format
    let unified_args = match request.arguments {
        Some(ref args) => {
            let mut new_args = args.clone();

@@ -403,7 +434,6 @@ impl McpServer {
}
"get_codebase_context" => {
    warn!("Tool 'get_codebase_context' is deprecated. Use 'codebase' with action='get_context' instead.");
    // Transform arguments to unified format
    let unified_args = match request.arguments {
        Some(ref args) => {
            let mut new_args = args.clone();

@@ -422,7 +452,6 @@ impl McpServer {
// ================================================================
"set_intention" => {
    warn!("Tool 'set_intention' is deprecated. Use 'intention' with action='set' instead.");
    // Transform arguments to unified format
    let unified_args = match request.arguments {
        Some(ref args) => {
            let mut new_args = args.clone();

@@ -437,7 +466,6 @@ impl McpServer {
}
"check_intentions" => {
    warn!("Tool 'check_intentions' is deprecated. Use 'intention' with action='check' instead.");
    // Transform arguments to unified format
    let unified_args = match request.arguments {
        Some(ref args) => {
            let mut new_args = args.clone();

@@ -452,7 +480,6 @@ impl McpServer {
}
"complete_intention" => {
    warn!("Tool 'complete_intention' is deprecated. Use 'intention' with action='update', status='complete' instead.");
    // Transform arguments to unified format
    let unified_args = match request.arguments {
        Some(ref args) => {
            let id = args.get("intentionId").cloned().unwrap_or(serde_json::Value::Null);

@@ -468,7 +495,6 @@ impl McpServer {
}
"snooze_intention" => {
    warn!("Tool 'snooze_intention' is deprecated. Use 'intention' with action='update', status='snooze' instead.");
    // Transform arguments to unified format
    let unified_args = match request.arguments {
        Some(ref args) => {
            let id = args.get("intentionId").cloned().unwrap_or(serde_json::Value::Null);

@@ -486,13 +512,11 @@ impl McpServer {
}
"list_intentions" => {
    warn!("Tool 'list_intentions' is deprecated. Use 'intention' with action='list' instead.");
    // Transform arguments to unified format
    let unified_args = match request.arguments {
        Some(ref args) => {
            let mut new_args = args.clone();
            if let Some(obj) = new_args.as_object_mut() {
                obj.insert("action".to_string(), serde_json::json!("list"));
                // Rename 'status' to 'filter_status' if present
                if let Some(status) = obj.remove("status") {
                    obj.insert("filter_status".to_string(), status);
                }

@@ -505,12 +529,7 @@ impl McpServer {
}

// ================================================================
// Stats and maintenance - REMOVED from MCP in v1.1
// Use CLI instead: vestige stats, vestige health, vestige consolidate
// ================================================================

// ================================================================
// Neuroscience tools (not deprecated, except get_memory_state above)
// Neuroscience tools (internal, not in tools/list)
// ================================================================
"list_by_state" => tools::memory_states::execute_list(&self.storage, request.arguments).await,
"state_stats" => tools::memory_states::execute_stats(&self.storage).await,

@@ -520,10 +539,8 @@
"match_context" => tools::context::execute(&self.storage, request.arguments).await,

// ================================================================
// Feedback / preference learning (not deprecated)
// Feedback (internal, still used by request_feedback)
// ================================================================
"promote_memory" => tools::feedback::execute_promote(&self.storage, &self.cognitive, request.arguments).await,
"demote_memory" => tools::feedback::execute_demote(&self.storage, &self.cognitive, request.arguments).await,
"request_feedback" => tools::feedback::execute_request_feedback(&self.storage, request.arguments).await,

// ================================================================

@@ -533,11 +550,9 @@
"memory_changelog" => tools::changelog::execute(&self.storage, request.arguments).await,

// ================================================================
// MAINTENANCE TOOLS (v1.2+)
// MAINTENANCE TOOLS (v1.2+, non-deprecated)
// ================================================================
"health_check" => tools::maintenance::execute_health_check(&self.storage, request.arguments).await,
"consolidate" => tools::maintenance::execute_consolidate(&self.storage, request.arguments).await,
"stats" => tools::maintenance::execute_stats(&self.storage, &self.cognitive, request.arguments).await,
"backup" => tools::maintenance::execute_backup(&self.storage, request.arguments).await,
"export" => tools::maintenance::execute_export(&self.storage, request.arguments).await,
"gc" => tools::maintenance::execute_gc(&self.storage, request.arguments).await,

@@ -546,7 +561,6 @@
// AUTO-SAVE & DEDUP TOOLS (v1.3+)
// ================================================================
"importance_score" => tools::importance::execute(&self.storage, &self.cognitive, request.arguments).await,
"session_checkpoint" => tools::checkpoint::execute(&self.storage, request.arguments).await,
"find_duplicates" => tools::dedup::execute(&self.storage, request.arguments).await,

// ================================================================

@@ -899,8 +913,8 @@ mod tests {
    let result = response.result.unwrap();
    let tools = result["tools"].as_array().unwrap();

    // v1.3+: 19 tools (8 unified + 2 temporal + 6 maintenance + 3 auto-save/dedup)
    assert_eq!(tools.len(), 23, "Expected exactly 23 tools in v1.5+");
    // v1.7: 18 tools (4 unified + 1 core + 2 temporal + 5 maintenance + 2 auto-save + 3 cognitive + 1 restore)
    assert_eq!(tools.len(), 18, "Expected exactly 18 tools in v1.7+");

    let tool_names: Vec<&str> = tools
        .iter()

@@ -913,29 +927,30 @@
    assert!(tool_names.contains(&"codebase"));
    assert!(tool_names.contains(&"intention"));

    // Core tools
    assert!(tool_names.contains(&"ingest"));
    // Core memory (smart_ingest absorbs ingest + checkpoint in v1.7)
    assert!(tool_names.contains(&"smart_ingest"));
    assert!(!tool_names.contains(&"ingest"), "ingest should be removed in v1.7");
    assert!(!tool_names.contains(&"session_checkpoint"), "session_checkpoint should be removed in v1.7");

    // Feedback tools
    assert!(tool_names.contains(&"promote_memory"));
    assert!(tool_names.contains(&"demote_memory"));
    // Feedback merged into memory tool (v1.7)
    assert!(!tool_names.contains(&"promote_memory"), "promote_memory should be removed in v1.7");
    assert!(!tool_names.contains(&"demote_memory"), "demote_memory should be removed in v1.7");

    // Temporal tools (v1.2)
    assert!(tool_names.contains(&"memory_timeline"));
    assert!(tool_names.contains(&"memory_changelog"));

    // Maintenance tools (v1.2)
    assert!(tool_names.contains(&"health_check"));
    // Maintenance tools (v1.7: system_status replaces health_check + stats)
    assert!(tool_names.contains(&"system_status"));
    assert!(!tool_names.contains(&"health_check"), "health_check should be removed in v1.7");
    assert!(!tool_names.contains(&"stats"), "stats should be removed in v1.7");
    assert!(tool_names.contains(&"consolidate"));
    assert!(tool_names.contains(&"stats"));
    assert!(tool_names.contains(&"backup"));
    assert!(tool_names.contains(&"export"));
    assert!(tool_names.contains(&"gc"));

    // Auto-save & dedup tools (v1.3)
    assert!(tool_names.contains(&"importance_score"));
    assert!(tool_names.contains(&"session_checkpoint"));
    assert!(tool_names.contains(&"find_duplicates"));

    // Cognitive tools (v1.5)
@@ -4,8 +4,9 @@
use std::sync::Arc;
use tokio::sync::Mutex;

use chrono::Utc;
use crate::cognitive::CognitiveEngine;
use vestige_core::Storage;
use vestige_core::{DreamHistoryRecord, Storage};

pub fn schema() -> serde_json::Value {
    serde_json::json!({

@@ -31,8 +32,8 @@ pub async fn execute(
        .and_then(|v| v.as_u64())
        .unwrap_or(50) as usize;

    let storage = storage.lock().await;
    let all_nodes = storage.get_all_nodes(memory_count as i32, 0)
    let storage_guard = storage.lock().await;
    let all_nodes = storage_guard.get_all_nodes(memory_count as i32, 0)
        .map_err(|e| format!("Failed to load memories: {}", e))?;

    if all_nodes.len() < 5 {

@@ -47,18 +48,36 @@ pub async fn execute(
        vestige_core::DreamMemory {
            id: n.id.clone(),
            content: n.content.clone(),
            embedding: storage.get_node_embedding(&n.id).ok().flatten(),
            embedding: storage_guard.get_node_embedding(&n.id).ok().flatten(),
            tags: n.tags.clone(),
            created_at: n.created_at,
            access_count: n.reps as u32,
        }
    }).collect();
    // Drop storage lock before taking cognitive lock (strict ordering)
    drop(storage);
    drop(storage_guard);

    let cog = cognitive.lock().await;
    let dream_result = cog.dreamer.dream(&dream_memories).await;
    let insights = cog.dreamer.synthesize_insights(&dream_memories);
    drop(cog);

    // Persist dream history (non-fatal on failure — dream still happened)
    {
        let mut storage_guard = storage.lock().await;
        let record = DreamHistoryRecord {
            dreamed_at: Utc::now(),
            duration_ms: dream_result.duration_ms as i64,
            memories_replayed: dream_memories.len() as i32,
            connections_found: dream_result.new_connections_found as i32,
            insights_generated: dream_result.insights_generated.len() as i32,
            memories_strengthened: dream_result.memories_strengthened as i32,
            memories_compressed: dream_result.memories_compressed as i32,
        };
        if let Err(e) = storage_guard.save_dream_history(&record) {
            tracing::warn!("Failed to persist dream history: {}", e);
        }
    }

    Ok(serde_json::json!({
        "status": "dreamed",

@@ -189,4 +208,28 @@ mod tests {
        assert!(value["stats"]["insights_generated"].is_number());
        assert!(value["stats"]["duration_ms"].is_number());
    }

    #[tokio::test]
    async fn test_dream_persists_to_database() {
        let (storage, _dir) = test_storage().await;
        ingest_n_memories(&storage, 10).await;

        // Before dream: no dream history
        {
            let s = storage.lock().await;
            assert!(s.get_last_dream().unwrap().is_none());
        }

        let result = execute(&storage, &test_cognitive(), None).await;
        assert!(result.is_ok());
        let value = result.unwrap();
        assert_eq!(value["status"], "dreamed");

        // After dream: dream history should exist
        {
            let s = storage.lock().await;
            let last = s.get_last_dream().unwrap();
            assert!(last.is_some(), "Dream should have been persisted to database");
        }
    }
}
@@ -280,7 +280,7 @@ pub async fn execute_request_feedback(
            "description": "Give Claude a custom instruction (e.g., 'update this memory', 'merge with X', 'add tag Y')"
        }
    ],
    "instruction": "PRESENT THESE OPTIONS TO THE USER. If they choose A, call promote_memory. If B, call demote_memory. If C, they will provide a custom instruction - execute it (could be: update the memory content, delete it, merge it, add tags, research something, etc.)."
    "instruction": "PRESENT THESE OPTIONS TO THE USER. If they choose A, call memory(action='promote'). If B, call memory(action='demote'). If C, they will provide a custom instruction - execute it (could be: update the memory content, delete it, merge it, add tags, research something, etc.)."
}))
}
@@ -1,7 +1,7 @@
//! Maintenance MCP Tools
//!
//! Exposes CLI-only operations as MCP tools so Claude can trigger them automatically:
//! health_check, consolidate, stats, backup, export, gc.
//! system_status, consolidate, backup, export, gc.

use chrono::{NaiveDate, Utc};
use serde::Deserialize;

@@ -17,6 +17,8 @@ use vestige_core::{FSRSScheduler, MemoryLifecycle, MemoryState, Storage};
// SCHEMAS
// ============================================================================

/// Deprecated in v1.7 — use system_status_schema() instead
#[allow(dead_code)]
pub fn health_check_schema() -> Value {
    serde_json::json!({
        "type": "object",

@@ -31,6 +33,8 @@ pub fn consolidate_schema() -> Value {
    })
}

/// Deprecated in v1.7 — use system_status_schema() instead
#[allow(dead_code)]
pub fn stats_schema() -> Value {
    serde_json::json!({
        "type": "object",
@@ -97,11 +101,203 @@ pub fn gc_schema() -> Value {
    })
}

/// Combined system status schema (replaces health_check + stats in v1.7.0)
pub fn system_status_schema() -> Value {
    serde_json::json!({
        "type": "object",
        "properties": {}
    })
}

// ============================================================================
// EXECUTE FUNCTIONS
// ============================================================================

/// Health check tool
/// Combined system status tool (merges health_check + stats, v1.7.0)
///
/// Returns system health status, full statistics, FSRS preview,
/// cognitive module health, state distribution, and actionable recommendations.
pub async fn execute_system_status(
    storage: &Arc<Mutex<Storage>>,
    cognitive: &Arc<Mutex<CognitiveEngine>>,
    _args: Option<Value>,
) -> Result<Value, String> {
    let storage_guard = storage.lock().await;
    let stats = storage_guard.get_stats().map_err(|e| e.to_string())?;

    // === Health assessment ===
    let status = if stats.total_nodes == 0 {
        "empty"
    } else if stats.average_retention < 0.3 {
        "critical"
    } else if stats.average_retention < 0.5 {
        "degraded"
    } else {
        "healthy"
    };

    let embedding_coverage = if stats.total_nodes > 0 {
        (stats.nodes_with_embeddings as f64 / stats.total_nodes as f64) * 100.0
    } else {
        0.0
    };

    let embedding_ready = storage_guard.is_embedding_ready();

    let mut warnings = Vec::new();
    if stats.average_retention < 0.5 && stats.total_nodes > 0 {
        warnings.push("Low average retention - consider running consolidation");
    }
    if stats.nodes_due_for_review > 10 {
        warnings.push("Many memories are due for review");
    }
    if stats.total_nodes > 0 && stats.nodes_with_embeddings == 0 {
        warnings.push("No embeddings generated - semantic search unavailable");
    }
    if embedding_coverage < 50.0 && stats.total_nodes > 10 {
        warnings.push("Low embedding coverage - run consolidate to improve semantic search");
    }

    let mut recommendations = Vec::new();
    if status == "critical" {
        recommendations.push("CRITICAL: Many memories have very low retention. Review important memories.");
    }
    if stats.nodes_due_for_review > 5 {
        recommendations.push("Review due memories to strengthen retention.");
    }
    if stats.nodes_with_embeddings < stats.total_nodes {
        recommendations.push("Run 'consolidate' to generate missing embeddings.");
    }
    if stats.total_nodes > 100 && stats.average_retention < 0.7 {
        recommendations.push("Consider running periodic consolidation.");
    }
    if status == "healthy" && recommendations.is_empty() {
        recommendations.push("Memory system is healthy!");
    }

    // === State distribution ===
    let nodes = storage_guard.get_all_nodes(500, 0).map_err(|e| e.to_string())?;
    let total = nodes.len();
    let (active, dormant, silent, unavailable) = if total > 0 {
        let mut a = 0usize;
        let mut d = 0usize;
        let mut s = 0usize;
        let mut u = 0usize;
        for node in &nodes {
            let accessibility = node.retention_strength * 0.5
                + node.retrieval_strength * 0.3
                + node.storage_strength * 0.2;
            if accessibility >= 0.7 {
                a += 1;
            } else if accessibility >= 0.4 {
                d += 1;
            } else if accessibility >= 0.1 {
                s += 1;
            } else {
                u += 1;
            }
        }
        (a, d, s, u)
    } else {
        (0, 0, 0, 0)
    };

    // === FSRS Preview ===
    let scheduler = FSRSScheduler::default();
    let fsrs_preview = if let Some(representative) = nodes.first() {
        let mut state = scheduler.new_card();
        state.difficulty = representative.difficulty;
        state.stability = representative.stability;
        state.reps = representative.reps;
        state.lapses = representative.lapses;
        state.last_review = representative.last_accessed;
        let elapsed = scheduler.days_since_review(&state.last_review);
        let preview = scheduler.preview_reviews(&state, elapsed);
        Some(serde_json::json!({
            "representativeMemoryId": representative.id,
            "elapsedDays": format!("{:.1}", elapsed),
            "intervalIfGood": preview.good.interval,
            "intervalIfEasy": preview.easy.interval,
            "intervalIfHard": preview.hard.interval,
            "currentRetrievability": format!("{:.3}", preview.good.retrievability),
        }))
    } else {
        None
    };

    // === Cognitive health ===
    let cognitive_health = if let Ok(cog) = cognitive.try_lock() {
        let activation_count = cog.activation_network.get_associations("_probe_").len();
        let prediction_accuracy = cog.predictive_memory.prediction_accuracy().unwrap_or(0.0);
        let scheduler_stats = cog.consolidation_scheduler.get_activity_stats();
        Some(serde_json::json!({
            "activationNetworkSize": activation_count,
            "predictionAccuracy": format!("{:.2}", prediction_accuracy),
            "modulesActive": 28,
            "schedulerStats": {
                "totalEvents": scheduler_stats.total_events,
                "eventsPerMinute": scheduler_stats.events_per_minute,
                "isIdle": scheduler_stats.is_idle,
                "timeUntilNextConsolidation": format!("{:?}", cog.consolidation_scheduler.time_until_next()),
            },
        }))
    } else {
        None
    };

    // === Automation triggers (for conditional dream/backup/gc at session start) ===
    let last_consolidation = storage_guard.get_last_consolidation().ok().flatten();
    let last_dream = storage_guard.get_last_dream().ok().flatten();
    let saves_since_last_dream = match &last_dream {
        Some(dt) => storage_guard.count_memories_since(*dt).unwrap_or(0),
        None => stats.total_nodes as i64,
    };
    let last_backup = Storage::get_last_backup_timestamp();

    drop(storage_guard);

    Ok(serde_json::json!({
        "tool": "system_status",
        // Health
        "status": status,
        "warnings": warnings,
        "recommendations": recommendations,
        "embeddingReady": embedding_ready,
        // Stats
        "totalMemories": stats.total_nodes,
        "dueForReview": stats.nodes_due_for_review,
        "averageRetention": stats.average_retention,
        "averageStorageStrength": stats.average_storage_strength,
        "averageRetrievalStrength": stats.average_retrieval_strength,
        "withEmbeddings": stats.nodes_with_embeddings,
        "embeddingCoverage": format!("{:.1}%", embedding_coverage),
        "embeddingModel": stats.embedding_model,
        "oldestMemory": stats.oldest_memory.map(|dt| dt.to_rfc3339()),
        "newestMemory": stats.newest_memory.map(|dt| dt.to_rfc3339()),
        // Distribution
        "stateDistribution": {
            "active": active,
            "dormant": dormant,
            "silent": silent,
            "unavailable": unavailable,
            "sampled": total,
        },
        // FSRS
        "fsrsPreview": fsrs_preview,
        // Cognitive
        "cognitiveHealth": cognitive_health,
        // Automation triggers — Claude uses these to decide when to dream/backup/gc
        "automationTriggers": {
            "lastDreamTimestamp": last_dream.map(|dt| dt.to_rfc3339()),
            "savesSinceLastDream": saves_since_last_dream,
            "lastBackupTimestamp": last_backup.map(|dt| dt.to_rfc3339()),
            "lastConsolidationTimestamp": last_consolidation.map(|dt| dt.to_rfc3339()),
        },
    }))
}

/// Health check tool — deprecated in v1.7, use execute_system_status() instead
#[allow(dead_code)]
pub async fn execute_health_check(
    storage: &Arc<Mutex<Storage>>,
    _args: Option<Value>,
@@ -193,7 +389,8 @@ pub async fn execute_consolidate(
    }))
}

/// Stats tool
/// Stats tool — deprecated in v1.7, use execute_system_status() instead
#[allow(dead_code)]
pub async fn execute_stats(
    storage: &Arc<Mutex<Storage>>,
    cognitive: &Arc<Mutex<CognitiveEngine>>,
@@ -671,3 +868,119 @@ pub async fn execute_gc(
         "totalAfter": all_nodes.len() - deleted,
     }))
 }
+
+// ============================================================================
+// TESTS
+// ============================================================================
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use crate::cognitive::CognitiveEngine;
+    use tempfile::TempDir;
+
+    fn test_cognitive() -> Arc<Mutex<CognitiveEngine>> {
+        Arc::new(Mutex::new(CognitiveEngine::new()))
+    }
+
+    async fn test_storage() -> (Arc<Mutex<Storage>>, TempDir) {
+        let dir = TempDir::new().unwrap();
+        let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
+        (Arc::new(Mutex::new(storage)), dir)
+    }
+
+    #[test]
+    fn test_system_status_schema() {
+        let schema = system_status_schema();
+        assert_eq!(schema["type"], "object");
+    }
+
+    #[tokio::test]
+    async fn test_system_status_empty_db() {
+        let (storage, _dir) = test_storage().await;
+        let result = execute_system_status(&storage, &test_cognitive(), None).await;
+        assert!(result.is_ok());
+        let value = result.unwrap();
+        assert_eq!(value["tool"], "system_status");
+        assert_eq!(value["status"], "empty");
+        assert_eq!(value["totalMemories"], 0);
+        assert!(value["warnings"].is_array());
+        assert!(value["recommendations"].is_array());
+    }
+
+    #[tokio::test]
+    async fn test_system_status_with_memories() {
+        let (storage, _dir) = test_storage().await;
+        {
+            let mut s = storage.lock().await;
+            s.ingest(vestige_core::IngestInput {
+                content: "Test memory for status".to_string(),
+                node_type: "fact".to_string(),
+                source: None,
+                sentiment_score: 0.0,
+                sentiment_magnitude: 0.0,
+                tags: vec![],
+                valid_from: None,
+                valid_until: None,
+            }).unwrap();
+        }
+        let result = execute_system_status(&storage, &test_cognitive(), None).await;
+        assert!(result.is_ok());
+        let value = result.unwrap();
+        assert_eq!(value["totalMemories"], 1);
+        assert!(value["stateDistribution"].is_object());
+        assert!(value["embeddingCoverage"].is_string());
+    }
+
+    #[tokio::test]
+    async fn test_system_status_has_cognitive_health() {
+        let (storage, _dir) = test_storage().await;
+        let result = execute_system_status(&storage, &test_cognitive(), None).await;
+        let value = result.unwrap();
+        assert!(value["cognitiveHealth"].is_object());
+        assert_eq!(value["cognitiveHealth"]["modulesActive"], 28);
+    }
+
+    #[tokio::test]
+    async fn test_system_status_has_automation_triggers() {
+        let (storage, _dir) = test_storage().await;
+        let result = execute_system_status(&storage, &test_cognitive(), None).await;
+        assert!(result.is_ok());
+        let value = result.unwrap();
+
+        let triggers = &value["automationTriggers"];
+        assert!(triggers.is_object(), "automationTriggers should be present");
+        assert!(triggers["lastDreamTimestamp"].is_null(), "No dreams yet");
+        assert_eq!(triggers["savesSinceLastDream"], 0, "Empty DB = 0 saves");
+        assert!(triggers["lastConsolidationTimestamp"].is_null(), "No consolidation yet");
+        // lastBackupTimestamp depends on filesystem state, just check it exists
+        assert!(triggers.get("lastBackupTimestamp").is_some());
+    }
+
+    #[tokio::test]
+    async fn test_system_status_automation_triggers_with_memories() {
+        let (storage, _dir) = test_storage().await;
+        {
+            let mut s = storage.lock().await;
+            for i in 0..3 {
+                s.ingest(vestige_core::IngestInput {
+                    content: format!("Automation trigger test memory {}", i),
+                    node_type: "fact".to_string(),
+                    source: None,
+                    sentiment_score: 0.0,
+                    sentiment_magnitude: 0.0,
+                    tags: vec![],
+                    valid_from: None,
+                    valid_until: None,
+                }).unwrap();
+            }
+        }
+        let result = execute_system_status(&storage, &test_cognitive(), None).await;
+        let value = result.unwrap();
+
+        let triggers = &value["automationTriggers"];
+        // No dream ever → savesSinceLastDream == totalMemories
+        assert_eq!(triggers["savesSinceLastDream"], 3);
+        assert!(triggers["lastDreamTimestamp"].is_null());
+    }
+}

@@ -8,7 +8,8 @@ use serde_json::Value;
 use std::sync::Arc;
 use tokio::sync::Mutex;
 
-use vestige_core::{MemoryState, Storage};
+use crate::cognitive::CognitiveEngine;
+use vestige_core::{MemoryState, Modification, OutcomeType, Storage};
 
 // Accessibility thresholds based on retention strength
 const ACCESSIBILITY_ACTIVE: f64 = 0.7;
@@ -42,12 +43,16 @@ pub fn schema() -> Value {
         "properties": {
             "action": {
                 "type": "string",
-                "enum": ["get", "delete", "state"],
-                "description": "Action to perform: 'get' retrieves full memory node, 'delete' removes memory, 'state' returns accessibility state"
+                "enum": ["get", "delete", "state", "promote", "demote"],
+                "description": "Action to perform: 'get' retrieves full memory node, 'delete' removes memory, 'state' returns accessibility state, 'promote' increases retrieval strength (thumbs up), 'demote' decreases retrieval strength (thumbs down)"
             },
             "id": {
                 "type": "string",
                 "description": "The ID of the memory node"
             },
+            "reason": {
+                "type": "string",
+                "description": "Why this memory is being promoted/demoted (optional, for logging). Only used with promote/demote actions."
+            }
         },
         "required": ["action", "id"]
@@ -59,11 +64,13 @@ pub fn schema() -> Value {
 struct MemoryArgs {
     action: String,
     id: String,
+    reason: Option<String>,
 }
 
 /// Execute the unified memory tool
 pub async fn execute(
     storage: &Arc<Mutex<Storage>>,
+    cognitive: &Arc<Mutex<CognitiveEngine>>,
     args: Option<Value>,
 ) -> Result<Value, String> {
     let args: MemoryArgs = match args {
@@ -78,8 +85,10 @@ pub async fn execute(
         "get" => execute_get(storage, &args.id).await,
         "delete" => execute_delete(storage, &args.id).await,
         "state" => execute_state(storage, &args.id).await,
+        "promote" => execute_promote(storage, cognitive, &args.id, args.reason).await,
+        "demote" => execute_demote(storage, cognitive, &args.id, args.reason).await,
         _ => Err(format!(
-            "Invalid action '{}'. Must be one of: get, delete, state",
+            "Invalid action '{}'. Must be one of: get, delete, state, promote, demote",
             args.action
         )),
     }
@@ -186,6 +195,120 @@ async fn execute_state(storage: &Arc<Mutex<Storage>>, id: &str) -> Result<Value,
     }))
 }
 
+/// Promote a memory (thumbs up) — increases retrieval strength with cognitive feedback pipeline
+async fn execute_promote(
+    storage: &Arc<Mutex<Storage>>,
+    cognitive: &Arc<Mutex<CognitiveEngine>>,
+    id: &str,
+    reason: Option<String>,
+) -> Result<Value, String> {
+    let storage_guard = storage.lock().await;
+
+    let before = storage_guard.get_node(id).map_err(|e| e.to_string())?
+        .ok_or_else(|| format!("Node not found: {}", id))?;
+
+    let node = storage_guard.promote_memory(id).map_err(|e| e.to_string())?;
+    drop(storage_guard);
+
+    // Cognitive feedback pipeline
+    if let Ok(mut cog) = cognitive.try_lock() {
+        cog.reward_signal.record_outcome(id, OutcomeType::Helpful);
+        cog.importance_tracker.on_retrieved(id, true);
+        if cog.reconsolidation.is_labile(id) {
+            cog.reconsolidation.apply_modification(
+                id,
+                Modification::StrengthenConnection {
+                    target_memory_id: id.to_string(),
+                    boost: 0.2,
+                },
+            );
+        }
+    }
+
+    Ok(serde_json::json!({
+        "success": true,
+        "action": "promoted",
+        "nodeId": node.id,
+        "reason": reason,
+        "changes": {
+            "retrievalStrength": {
+                "before": before.retrieval_strength,
+                "after": node.retrieval_strength,
+                "delta": "+0.20"
+            },
+            "retentionStrength": {
+                "before": before.retention_strength,
+                "after": node.retention_strength,
+                "delta": "+0.10"
+            },
+            "stability": {
+                "before": before.stability,
+                "after": node.stability,
+                "multiplier": "1.5x"
+            }
+        },
+        "message": format!("Memory promoted. It will now surface more often in searches. Retrieval: {:.2} -> {:.2}",
+            before.retrieval_strength, node.retrieval_strength),
+    }))
+}
+
+/// Demote a memory (thumbs down) — decreases retrieval strength with cognitive feedback pipeline
+async fn execute_demote(
+    storage: &Arc<Mutex<Storage>>,
+    cognitive: &Arc<Mutex<CognitiveEngine>>,
+    id: &str,
+    reason: Option<String>,
+) -> Result<Value, String> {
+    let storage_guard = storage.lock().await;
+
+    let before = storage_guard.get_node(id).map_err(|e| e.to_string())?
+        .ok_or_else(|| format!("Node not found: {}", id))?;
+
+    let node = storage_guard.demote_memory(id).map_err(|e| e.to_string())?;
+    drop(storage_guard);
+
+    // Cognitive feedback pipeline
+    if let Ok(mut cog) = cognitive.try_lock() {
+        cog.reward_signal.record_outcome(id, OutcomeType::NotHelpful);
+        cog.importance_tracker.on_retrieved(id, false);
+        if cog.reconsolidation.is_labile(id) {
+            cog.reconsolidation.apply_modification(
+                id,
+                Modification::AddContext {
+                    context: "User reported this memory was wrong/unhelpful".to_string(),
+                },
+            );
+        }
+    }
+
+    Ok(serde_json::json!({
+        "success": true,
+        "action": "demoted",
+        "nodeId": node.id,
+        "reason": reason,
+        "changes": {
+            "retrievalStrength": {
+                "before": before.retrieval_strength,
+                "after": node.retrieval_strength,
+                "delta": "-0.30"
+            },
+            "retentionStrength": {
+                "before": before.retention_strength,
+                "after": node.retention_strength,
+                "delta": "-0.15"
+            },
+            "stability": {
+                "before": before.stability,
+                "after": node.stability,
+                "multiplier": "0.5x"
+            }
+        },
+        "message": format!("Memory demoted. Better alternatives will now surface instead. Retrieval: {:.2} -> {:.2}",
+            before.retrieval_strength, node.retrieval_strength),
+        "note": "Memory is NOT deleted - it remains searchable but ranks lower."
+    }))
+}
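The promote/demote responses above report fixed deltas: retrieval +0.20/-0.30, retention +0.10/-0.15, stability x1.5/x0.5. A standalone sketch of that arithmetic; the clamping to [0, 1] is an assumption of this sketch, since the actual update lives in `Storage::promote_memory`/`demote_memory`:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
struct Strengths {
    retrieval: f64,
    retention: f64,
    stability: f64,
}

/// Promote deltas as reported in the tool response:
/// retrieval +0.20, retention +0.10, stability x1.5.
/// Clamping strengths to [0.0, 1.0] is assumed here.
fn promote(s: Strengths) -> Strengths {
    Strengths {
        retrieval: (s.retrieval + 0.20).min(1.0),
        retention: (s.retention + 0.10).min(1.0),
        stability: s.stability * 1.5,
    }
}

/// Demote deltas: retrieval -0.30, retention -0.15, stability x0.5.
fn demote(s: Strengths) -> Strengths {
    Strengths {
        retrieval: (s.retrieval - 0.30).max(0.0),
        retention: (s.retention - 0.15).max(0.0),
        stability: s.stability * 0.5,
    }
}
```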
 
 #[cfg(test)]
 mod tests {
     use super::*;
@@ -218,11 +341,21 @@ mod tests {
         let schema = schema();
         assert!(schema["properties"]["action"].is_object());
         assert!(schema["properties"]["id"].is_object());
+        assert!(schema["properties"]["reason"].is_object());
         assert_eq!(schema["required"], serde_json::json!(["action", "id"]));
+        // Verify all 5 actions are in enum
+        let actions = schema["properties"]["action"]["enum"].as_array().unwrap();
+        assert_eq!(actions.len(), 5);
+        assert!(actions.contains(&serde_json::json!("promote")));
+        assert!(actions.contains(&serde_json::json!("demote")));
     }
 
     // === INTEGRATION TESTS ===
 
+    fn test_cognitive() -> Arc<Mutex<CognitiveEngine>> {
+        Arc::new(Mutex::new(CognitiveEngine::new()))
+    }
+
     async fn test_storage() -> (Arc<Mutex<Storage>>, tempfile::TempDir) {
         let dir = tempfile::TempDir::new().unwrap();
         let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
@@ -249,7 +382,7 @@ mod tests {
     #[tokio::test]
     async fn test_missing_args_fails() {
         let (storage, _dir) = test_storage().await;
-        let result = execute(&storage, None).await;
+        let result = execute(&storage, &test_cognitive(), None).await;
         assert!(result.is_err());
         assert!(result.unwrap_err().contains("Missing arguments"));
     }
@@ -258,7 +391,7 @@ mod tests {
     async fn test_invalid_action_fails() {
         let (storage, _dir) = test_storage().await;
         let args = serde_json::json!({ "action": "invalid", "id": "00000000-0000-0000-0000-000000000000" });
-        let result = execute(&storage, Some(args)).await;
+        let result = execute(&storage, &test_cognitive(), Some(args)).await;
         assert!(result.is_err());
         assert!(result.unwrap_err().contains("Invalid action"));
     }
@@ -267,7 +400,7 @@ mod tests {
     async fn test_invalid_uuid_fails() {
         let (storage, _dir) = test_storage().await;
         let args = serde_json::json!({ "action": "get", "id": "not-a-uuid" });
-        let result = execute(&storage, Some(args)).await;
+        let result = execute(&storage, &test_cognitive(), Some(args)).await;
         assert!(result.is_err());
         assert!(result.unwrap_err().contains("Invalid memory ID format"));
     }
@@ -277,7 +410,7 @@ mod tests {
         let (storage, _dir) = test_storage().await;
         let id = ingest_memory(&storage).await;
         let args = serde_json::json!({ "action": "get", "id": id });
-        let result = execute(&storage, Some(args)).await;
+        let result = execute(&storage, &test_cognitive(), Some(args)).await;
         assert!(result.is_ok());
         let value = result.unwrap();
         assert_eq!(value["action"], "get");
@@ -293,7 +426,7 @@ mod tests {
     async fn test_get_nonexistent_memory() {
         let (storage, _dir) = test_storage().await;
         let args = serde_json::json!({ "action": "get", "id": "00000000-0000-0000-0000-000000000000" });
-        let result = execute(&storage, Some(args)).await;
+        let result = execute(&storage, &test_cognitive(), Some(args)).await;
         assert!(result.is_ok());
         let value = result.unwrap();
         assert_eq!(value["found"], false);
@@ -305,7 +438,7 @@ mod tests {
         let (storage, _dir) = test_storage().await;
         let id = ingest_memory(&storage).await;
         let args = serde_json::json!({ "action": "delete", "id": id });
-        let result = execute(&storage, Some(args)).await;
+        let result = execute(&storage, &test_cognitive(), Some(args)).await;
         assert!(result.is_ok());
         let value = result.unwrap();
         assert_eq!(value["action"], "delete");
@@ -316,7 +449,7 @@ mod tests {
     async fn test_delete_nonexistent_memory() {
         let (storage, _dir) = test_storage().await;
         let args = serde_json::json!({ "action": "delete", "id": "00000000-0000-0000-0000-000000000000" });
-        let result = execute(&storage, Some(args)).await;
+        let result = execute(&storage, &test_cognitive(), Some(args)).await;
         assert!(result.is_ok());
         let value = result.unwrap();
         assert_eq!(value["success"], false);
@@ -328,9 +461,9 @@ mod tests {
         let (storage, _dir) = test_storage().await;
         let id = ingest_memory(&storage).await;
         let del_args = serde_json::json!({ "action": "delete", "id": id });
-        execute(&storage, Some(del_args)).await.unwrap();
+        execute(&storage, &test_cognitive(), Some(del_args)).await.unwrap();
         let get_args = serde_json::json!({ "action": "get", "id": id });
-        let result = execute(&storage, Some(get_args)).await;
+        let result = execute(&storage, &test_cognitive(), Some(get_args)).await;
         let value = result.unwrap();
         assert_eq!(value["found"], false);
     }
@@ -340,7 +473,7 @@ mod tests {
         let (storage, _dir) = test_storage().await;
         let id = ingest_memory(&storage).await;
         let args = serde_json::json!({ "action": "state", "id": id });
-        let result = execute(&storage, Some(args)).await;
+        let result = execute(&storage, &test_cognitive(), Some(args)).await;
         assert!(result.is_ok());
         let value = result.unwrap();
         assert_eq!(value["action"], "state");
@@ -360,14 +493,13 @@ mod tests {
     async fn test_state_nonexistent_memory_fails() {
         let (storage, _dir) = test_storage().await;
         let args = serde_json::json!({ "action": "state", "id": "00000000-0000-0000-0000-000000000000" });
-        let result = execute(&storage, Some(args)).await;
+        let result = execute(&storage, &test_cognitive(), Some(args)).await;
         assert!(result.is_err());
         assert!(result.unwrap_err().contains("not found"));
     }
 
     #[test]
     fn test_accessibility_boundary_active() {
         // Exactly at active threshold
         let a = compute_accessibility(1.0, 0.7, 0.5);
         assert!(a >= ACCESSIBILITY_ACTIVE);
         assert!(matches!(state_from_accessibility(a), MemoryState::Active));
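The boundary test above depends on how an accessibility score maps to a `MemoryState`. A self-contained sketch of that mapping, using the `ACCESSIBILITY_ACTIVE = 0.7` constant from the diff; the 0.4 and 0.1 cutoffs below are assumptions for illustration, not values taken from this commit:

```rust
#[derive(Debug, PartialEq)]
enum MemoryState { Active, Dormant, Silent, Unavailable }

const ACCESSIBILITY_ACTIVE: f64 = 0.7;  // from the diff above
const ACCESSIBILITY_DORMANT: f64 = 0.4; // assumed cutoff
const ACCESSIBILITY_SILENT: f64 = 0.1;  // assumed cutoff

/// Map an accessibility score to a memory state by descending thresholds.
fn state_from_accessibility(a: f64) -> MemoryState {
    if a >= ACCESSIBILITY_ACTIVE {
        MemoryState::Active
    } else if a >= ACCESSIBILITY_DORMANT {
        MemoryState::Dormant
    } else if a >= ACCESSIBILITY_SILENT {
        MemoryState::Silent
    } else {
        MemoryState::Unavailable
    }
}
```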
@@ -379,4 +511,114 @@ mod tests {
         assert_eq!(a, 0.0);
         assert!(matches!(state_from_accessibility(a), MemoryState::Unavailable));
     }
+
+    // ========================================================================
+    // PROMOTE/DEMOTE TESTS (ported from feedback.rs, v1.7.0 merge)
+    // ========================================================================
+
+    #[tokio::test]
+    async fn test_promote_missing_id_fails() {
+        let (storage, _dir) = test_storage().await;
+        let args = serde_json::json!({ "action": "promote", "id": "not-a-uuid" });
+        let result = execute(&storage, &test_cognitive(), Some(args)).await;
+        assert!(result.is_err());
+        assert!(result.unwrap_err().contains("Invalid memory ID format"));
+    }
+
+    #[tokio::test]
+    async fn test_promote_nonexistent_node_fails() {
+        let (storage, _dir) = test_storage().await;
+        let args = serde_json::json!({ "action": "promote", "id": "00000000-0000-0000-0000-000000000000" });
+        let result = execute(&storage, &test_cognitive(), Some(args)).await;
+        assert!(result.is_err());
+        assert!(result.unwrap_err().contains("Node not found"));
+    }
+
+    #[tokio::test]
+    async fn test_promote_succeeds() {
+        let (storage, _dir) = test_storage().await;
+        let id = ingest_memory(&storage).await;
+        let args = serde_json::json!({ "action": "promote", "id": id, "reason": "It was helpful" });
+        let result = execute(&storage, &test_cognitive(), Some(args)).await;
+        assert!(result.is_ok());
+        let value = result.unwrap();
+        assert_eq!(value["success"], true);
+        assert_eq!(value["action"], "promoted");
+        assert_eq!(value["nodeId"], id);
+        assert_eq!(value["reason"], "It was helpful");
+        assert!(value["changes"]["retrievalStrength"].is_object());
+    }
+
+    #[tokio::test]
+    async fn test_promote_without_reason_succeeds() {
+        let (storage, _dir) = test_storage().await;
+        let id = ingest_memory(&storage).await;
+        let args = serde_json::json!({ "action": "promote", "id": id });
+        let result = execute(&storage, &test_cognitive(), Some(args)).await;
+        assert!(result.is_ok());
+        let value = result.unwrap();
+        assert_eq!(value["success"], true);
+        assert!(value["reason"].is_null());
+    }
+
+    #[tokio::test]
+    async fn test_promote_changes_contain_expected_fields() {
+        let (storage, _dir) = test_storage().await;
+        let id = ingest_memory(&storage).await;
+        let args = serde_json::json!({ "action": "promote", "id": id });
+        let result = execute(&storage, &test_cognitive(), Some(args)).await;
+        let value = result.unwrap();
+        assert!(value["changes"]["retrievalStrength"]["before"].is_number());
+        assert!(value["changes"]["retrievalStrength"]["after"].is_number());
+        assert_eq!(value["changes"]["retrievalStrength"]["delta"], "+0.20");
+        assert!(value["changes"]["retentionStrength"]["before"].is_number());
+        assert_eq!(value["changes"]["retentionStrength"]["delta"], "+0.10");
+        assert_eq!(value["changes"]["stability"]["multiplier"], "1.5x");
+    }
+
+    #[tokio::test]
+    async fn test_demote_invalid_uuid_fails() {
+        let (storage, _dir) = test_storage().await;
+        let args = serde_json::json!({ "action": "demote", "id": "bad-id" });
+        let result = execute(&storage, &test_cognitive(), Some(args)).await;
+        assert!(result.is_err());
+        assert!(result.unwrap_err().contains("Invalid memory ID format"));
+    }
+
+    #[tokio::test]
+    async fn test_demote_nonexistent_node_fails() {
+        let (storage, _dir) = test_storage().await;
+        let args = serde_json::json!({ "action": "demote", "id": "00000000-0000-0000-0000-000000000000" });
+        let result = execute(&storage, &test_cognitive(), Some(args)).await;
+        assert!(result.is_err());
+        assert!(result.unwrap_err().contains("Node not found"));
+    }
+
+    #[tokio::test]
+    async fn test_demote_succeeds() {
+        let (storage, _dir) = test_storage().await;
+        let id = ingest_memory(&storage).await;
+        let args = serde_json::json!({ "action": "demote", "id": id, "reason": "It was wrong" });
+        let result = execute(&storage, &test_cognitive(), Some(args)).await;
+        assert!(result.is_ok());
+        let value = result.unwrap();
+        assert_eq!(value["success"], true);
+        assert_eq!(value["action"], "demoted");
+        assert_eq!(value["nodeId"], id);
+        assert_eq!(value["reason"], "It was wrong");
+        assert!(value["note"].as_str().unwrap().contains("NOT deleted"));
+    }
+
+    #[tokio::test]
+    async fn test_demote_changes_contain_expected_fields() {
+        let (storage, _dir) = test_storage().await;
+        let id = ingest_memory(&storage).await;
+        let args = serde_json::json!({ "action": "demote", "id": id });
+        let result = execute(&storage, &test_cognitive(), Some(args)).await;
+        let value = result.unwrap();
+        assert!(value["changes"]["retrievalStrength"]["before"].is_number());
+        assert_eq!(value["changes"]["retrievalStrength"]["delta"], "-0.30");
+        assert_eq!(value["changes"]["retentionStrength"]["delta"], "-0.15");
+        assert_eq!(value["changes"]["stability"]["multiplier"], "0.5x");
+    }
 }

@@ -8,7 +8,6 @@
 
 // Active unified tools
 pub mod codebase_unified;
-pub mod ingest;
 pub mod intention_unified;
 pub mod memory_unified;
 pub mod search_unified;
@@ -22,7 +21,6 @@ pub mod timeline;
 pub mod maintenance;
 
 // v1.3: Auto-save and dedup tools
-pub mod checkpoint;
 pub mod dedup;
 pub mod importance;
 
@@ -35,6 +33,8 @@ pub mod restore;
 // Deprecated tools - kept for internal backwards compatibility
 // These modules are intentionally unused in the public API
+#[allow(dead_code)]
+pub mod checkpoint;
 #[allow(dead_code)]
 pub mod codebase;
 #[allow(dead_code)]
 pub mod consolidate;
@@ -43,6 +43,8 @@ pub mod context;
 #[allow(dead_code)]
 pub mod feedback;
+#[allow(dead_code)]
+pub mod ingest;
 #[allow(dead_code)]
 pub mod intentions;
 #[allow(dead_code)]
 pub mod knowledge;

@@ -26,13 +26,17 @@ use vestige_core::{
 };
 
 /// Input schema for smart_ingest tool
+///
+/// Supports two modes:
+/// - **Single mode**: provide `content` (required) + optional fields
+/// - **Batch mode**: provide `items` array (max 20), each with full cognitive pipeline
 pub fn schema() -> Value {
     serde_json::json!({
         "type": "object",
         "properties": {
             "content": {
                 "type": "string",
-                "description": "The content to remember. Will be compared against existing memories."
+                "description": "The content to remember. Will be compared against existing memories. (Single mode)"
             },
             "node_type": {
                 "type": "string",
@@ -52,20 +56,61 @@ pub fn schema() -> Value {
                 "type": "boolean",
                 "description": "Force creation of a new memory even if similar content exists",
                 "default": false
             },
+            "items": {
+                "type": "array",
+                "description": "Batch mode: array of items to save (max 20). Each runs through full cognitive pipeline with Prediction Error Gating. Use at session end or before context compaction.",
+                "maxItems": 20,
+                "items": {
+                    "type": "object",
+                    "properties": {
+                        "content": {
+                            "type": "string",
+                            "description": "The content to remember"
+                        },
+                        "tags": {
+                            "type": "array",
+                            "items": { "type": "string" },
+                            "description": "Tags for categorization"
+                        },
+                        "node_type": {
+                            "type": "string",
+                            "description": "Type: fact, concept, event, person, place, note, pattern, decision",
+                            "default": "fact"
+                        },
+                        "source": {
+                            "type": "string",
+                            "description": "Source reference"
+                        }
+                    },
+                    "required": ["content"]
+                }
+            }
-        },
-        "required": ["content"]
+        }
     })
 }
 
 #[derive(Debug, Deserialize)]
 #[serde(rename_all = "camelCase")]
 struct SmartIngestArgs {
-    content: String,
+    content: Option<String>,
     #[serde(alias = "node_type")]
     node_type: Option<String>,
     tags: Option<Vec<String>>,
     source: Option<String>,
     force_create: Option<bool>,
+    items: Option<Vec<BatchItem>>,
 }
 
+/// A single item in batch mode
+#[derive(Debug, Deserialize)]
+#[serde(rename_all = "camelCase")]
+struct BatchItem {
+    content: String,
+    tags: Option<Vec<String>>,
+    #[serde(alias = "node_type")]
+    node_type: Option<String>,
+    source: Option<String>,
+}
+
 pub async fn execute(
@@ -78,12 +123,20 @@ pub async fn execute(
         None => return Err("Missing arguments".to_string()),
     };
 
+    // Detect mode: batch (items present) vs single (content present)
+    if let Some(items) = args.items {
+        return execute_batch(storage, cognitive, items).await;
+    }
+
+    // Single mode: content is required
+    let content = args.content.ok_or("Missing 'content' field. Provide 'content' for single mode or 'items' for batch mode.")?;
+
     // Validate content
-    if args.content.trim().is_empty() {
+    if content.trim().is_empty() {
         return Err("Content cannot be empty".to_string());
     }
 
-    if args.content.len() > 1_000_000 {
+    if content.len() > 1_000_000 {
         return Err("Content too large (max 1MB)".to_string());
     }
 
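The dispatch above can be modeled as a small pure function: a present `items` field selects batch mode, otherwise a non-empty `content` is required. A sketch of that rule as a hypothetical `detect_mode` helper (not part of the diff), reusing its error messages:

```rust
/// Which mode a smart_ingest call resolves to: `items` wins when present,
/// otherwise non-empty `content` is required.
enum Mode { Batch(usize), Single }

fn detect_mode(content: Option<&str>, items_len: Option<usize>) -> Result<Mode, String> {
    if let Some(n) = items_len {
        // Batch mode takes precedence, even if content is also set
        return Ok(Mode::Batch(n));
    }
    match content {
        Some(c) if !c.trim().is_empty() => Ok(Mode::Single),
        Some(_) => Err("Content cannot be empty".into()),
        None => Err("Missing 'content' field. Provide 'content' for single mode or 'items' for batch mode.".into()),
    }
}
```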
@@ -96,7 +149,7 @@ pub async fn execute(
     if let Ok(cog) = cognitive.try_lock() {
         // 4A. Full 4-channel importance scoring
         let context = ImportanceContext::current();
-        let importance = cog.importance_signals.compute_importance(&args.content, &context);
+        let importance = cog.importance_signals.compute_importance(&content, &context);
         importance_composite = importance.composite;
 
         // 4B. Intent detection → auto-tag
@@ -113,11 +166,11 @@ pub async fn execute(
         }
 
         // 4D. Adaptive embedding — detect content type for logging
-        let _content_type = ContentType::detect(&args.content);
+        let _content_type = ContentType::detect(&content);
     }
 
     let input = IngestInput {
-        content: args.content.clone(),
+        content: content.clone(),
         node_type: args.node_type.unwrap_or_else(|| "fact".to_string()),
         source: args.source,
         sentiment_score: 0.0,
@@ -217,6 +270,181 @@ pub async fn execute(
     }
 }
 
+/// Execute batch mode: process up to 20 items, each with full cognitive pipeline.
+///
+/// Unlike the old `session_checkpoint` tool, batch mode runs the full cognitive
+/// pre-ingest (importance scoring, intent detection) and post-ingest (synaptic
+/// tagging, novelty update, hippocampal indexing) pipelines per item.
+async fn execute_batch(
+    storage: &Arc<Mutex<Storage>>,
+    cognitive: &Arc<Mutex<CognitiveEngine>>,
+    items: Vec<BatchItem>,
+) -> Result<Value, String> {
+    if items.is_empty() {
+        return Err("Items array cannot be empty".to_string());
+    }
+    if items.len() > 20 {
+        return Err("Maximum 20 items per batch".to_string());
+    }
+
+    let mut results = Vec::new();
+    let mut created = 0u32;
+    let mut updated = 0u32;
+    let mut skipped = 0u32;
+    let mut errors = 0u32;
+
+    for (i, item) in items.into_iter().enumerate() {
+        // Skip empty content
+        if item.content.trim().is_empty() {
+            results.push(serde_json::json!({
+                "index": i,
+                "status": "skipped",
+                "reason": "Empty content"
+            }));
+            skipped += 1;
+            continue;
+        }
+
+        // Skip content > 1MB
+        if item.content.len() > 1_000_000 {
+            results.push(serde_json::json!({
+                "index": i,
+                "status": "skipped",
+                "reason": "Content too large (max 1MB)"
+            }));
+            skipped += 1;
+            continue;
+        }
+
+        // ================================================================
+        // COGNITIVE PRE-INGEST (per item)
+        // ================================================================
+        let mut importance_composite = 0.0_f64;
+        let mut tags = item.tags.unwrap_or_default();
+
+        if let Ok(cog) = cognitive.try_lock() {
+            let context = ImportanceContext::current();
+            let importance = cog.importance_signals.compute_importance(&item.content, &context);
+            importance_composite = importance.composite;
+
+            let intent_result = cog.intent_detector.detect_intent();
+            if intent_result.confidence > 0.5 {
+                let intent_tag = format!("intent:{:?}", intent_result.primary_intent);
+                let intent_tag = if intent_tag.len() > 50 {
+                    format!("{}...", &intent_tag[..47])
+                } else {
+                    intent_tag
+                };
+                tags.push(intent_tag);
+            }
+
+            let _content_type = ContentType::detect(&item.content);
+        }
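The intent tag built above is capped at 50 characters with a trailing ellipsis. A standalone sketch of that truncation as a hypothetical `cap_tag` helper; the byte slicing mirrors the diff and is safe only under the assumption that the tag is ASCII, which holds for `Debug` output of an intent enum:

```rust
/// Cap a tag at `max` bytes, appending "..." when truncated,
/// as the ingest pipeline does for intent tags (max 50).
/// Assumes ASCII input, so byte-index slicing cannot split a char.
fn cap_tag(tag: String, max: usize) -> String {
    if tag.len() > max {
        // Reserve 3 bytes for the ellipsis so the result is exactly `max` long
        format!("{}...", &tag[..max - 3])
    } else {
        tag
    }
}
```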
+
+        let input = IngestInput {
+            content: item.content.clone(),
+            node_type: item.node_type.unwrap_or_else(|| "fact".to_string()),
+            source: item.source,
+            sentiment_score: 0.0,
+            sentiment_magnitude: importance_composite,
+            tags,
+            valid_from: None,
+            valid_until: None,
+        };
+
+        // ================================================================
+        // INGEST (storage lock per item)
+        // ================================================================
+        let mut storage_guard = storage.lock().await;
+
+        #[cfg(all(feature = "embeddings", feature = "vector-search"))]
+        {
+            match storage_guard.smart_ingest(input) {
+                Ok(result) => {
+                    let node_id = result.node.id.clone();
+                    let node_content = result.node.content.clone();
+                    let node_type = result.node.node_type.clone();
+                    drop(storage_guard);
+
+                    match result.decision.as_str() {
+                        "create" | "supersede" | "replace" => created += 1,
+                        "update" | "reinforce" | "merge" | "add_context" => updated += 1,
+                        _ => created += 1,
+                    }
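The match above folds `smart_ingest` decision strings into the batch summary counters, with unknown decisions defaulting to `created`. The same bucketing as a hypothetical standalone helper:

```rust
/// Bucket a smart_ingest decision string into a batch summary counter name,
/// mirroring the match in execute_batch: unknown decisions count as "created".
fn bucket(decision: &str) -> &'static str {
    match decision {
        "create" | "supersede" | "replace" => "created",
        "update" | "reinforce" | "merge" | "add_context" => "updated",
        _ => "created",
    }
}
```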
+
+                    // Post-ingest cognitive side effects
+                    run_post_ingest(cognitive, &node_id, &node_content, &node_type, importance_composite);
+
+                    results.push(serde_json::json!({
+                        "index": i,
+                        "status": "saved",
+                        "decision": result.decision,
+                        "nodeId": node_id,
+                        "similarity": result.similarity,
+                        "importanceScore": importance_composite,
+                        "reason": result.reason
+                    }));
+                }
+                Err(e) => {
+                    drop(storage_guard);
+                    errors += 1;
+                    results.push(serde_json::json!({
+                        "index": i,
+                        "status": "error",
+                        "reason": e.to_string()
+                    }));
+                }
+            }
+        }
+
+        #[cfg(not(all(feature = "embeddings", feature = "vector-search")))]
+        {
+            match storage_guard.ingest(input) {
+                Ok(node) => {
+                    let node_id = node.id.clone();
+                    let node_content = node.content.clone();
+                    let node_type = node.node_type.clone();
+                    drop(storage_guard);
+
+                    created += 1;
+                    run_post_ingest(cognitive, &node_id, &node_content, &node_type, importance_composite);
+
+                    results.push(serde_json::json!({
+                        "index": i,
+                        "status": "saved",
+                        "decision": "create",
+                        "nodeId": node_id,
+                        "importanceScore": importance_composite,
+                        "reason": "Embeddings not available - used regular ingest"
+                    }));
+                }
+                Err(e) => {
+                    drop(storage_guard);
+                    errors += 1;
+                    results.push(serde_json::json!({
+                        "index": i,
+                        "status": "error",
+                        "reason": e.to_string()
+                    }));
+                }
+            }
+        }
+    }
+
+    Ok(serde_json::json!({
+        "success": errors == 0,
+        "mode": "batch",
+        "summary": {
+            "total": results.len(),
+            "created": created,
+            "updated": updated,
+            "skipped": skipped,
+            "errors": errors
+        },
+        "results": results
+    }))
+}
+
+/// Cognitive post-ingest side effects: synaptic tagging, novelty update, hippocampal indexing.
+///
+/// Uses try_lock() for non-blocking access. If cognitive is locked, side effects are skipped.
@@ -323,7 +551,9 @@ mod tests {
         assert_eq!(schema_value["type"], "object");
         assert!(schema_value["properties"]["content"].is_object());
         assert!(schema_value["properties"]["forceCreate"].is_object());
-        assert!(schema_value["required"].as_array().unwrap().contains(&serde_json::json!("content")));
+        assert!(schema_value["properties"]["items"].is_object());
+        // v1.7: no top-level required — content for single mode, items for batch mode
+        assert!(schema_value.get("required").is_none() || schema_value["required"].is_null());
     }

     #[tokio::test]
@@ -402,6 +632,253 @@ mod tests {
        let args = serde_json::json!({ "tags": ["test"] });
        let result = execute(&storage, &test_cognitive(), Some(args)).await;
        assert!(result.is_err());
        let err = result.unwrap_err();
        assert!(err.contains("Invalid arguments"));
        assert!(err.contains("content"));
    }

    // ========================================================================
    // TESTS PORTED FROM ingest.rs (v1.7.0 merge)
    // ========================================================================

    #[tokio::test]
    async fn test_smart_ingest_with_all_optional_fields() {
        let (storage, _dir) = test_storage().await;
        let args = serde_json::json!({
            "content": "Complex memory with all metadata.",
            "node_type": "decision",
            "tags": ["architecture", "design"],
            "source": "team meeting notes"
        });
        let result = execute(&storage, &test_cognitive(), Some(args)).await;
        assert!(result.is_ok());
        let value = result.unwrap();
        assert_eq!(value["success"], true);
        assert!(value["nodeId"].is_string());
    }

    #[tokio::test]
    async fn test_smart_ingest_default_node_type_is_fact() {
        let (storage, _dir) = test_storage().await;
        let args = serde_json::json!({ "content": "Default type test content." });
        let result = execute(&storage, &test_cognitive(), Some(args)).await;
        assert!(result.is_ok());
        let node_id = result.unwrap()["nodeId"].as_str().unwrap().to_string();
        let storage_lock = storage.lock().await;
        let node = storage_lock.get_node(&node_id).unwrap().unwrap();
        assert_eq!(node.node_type, "fact");
    }

    #[test]
    fn test_schema_has_optional_fields() {
        let schema_value = schema();
        assert!(schema_value["properties"]["node_type"].is_object());
        assert!(schema_value["properties"]["tags"].is_object());
        assert!(schema_value["properties"]["source"].is_object());
    }

    #[tokio::test]
    async fn test_smart_ingest_with_source() {
        let (storage, _dir) = test_storage().await;
        let args = serde_json::json!({
            "content": "MCP protocol version 2024-11-05 is the current standard.",
            "source": "https://modelcontextprotocol.io/spec"
        });
        let result = execute(&storage, &test_cognitive(), Some(args)).await;
        assert!(result.is_ok());
        let value = result.unwrap();
        assert_eq!(value["success"], true);
    }

    // ========================================================================
    // BATCH MODE TESTS (ported from checkpoint.rs, v1.7.0 merge)
    // ========================================================================

    #[tokio::test]
    async fn test_batch_empty_items_fails() {
        let (storage, _dir) = test_storage().await;
        let result = execute(&storage, &test_cognitive(), Some(serde_json::json!({ "items": [] }))).await;
        assert!(result.is_err());
        assert!(result.unwrap_err().contains("empty"));
    }

    #[tokio::test]
    async fn test_batch_ingest() {
        let (storage, _dir) = test_storage().await;
        let result = execute(
            &storage, &test_cognitive(),
            Some(serde_json::json!({
                "items": [
                    { "content": "First batch item", "tags": ["test"] },
                    { "content": "Second batch item", "tags": ["test"] }
                ]
            })),
        ).await;
        assert!(result.is_ok());
        let value = result.unwrap();
        assert_eq!(value["mode"], "batch");
        assert_eq!(value["summary"]["total"], 2);
    }

    #[tokio::test]
    async fn test_batch_skips_empty_content() {
        let (storage, _dir) = test_storage().await;
        let result = execute(
            &storage, &test_cognitive(),
            Some(serde_json::json!({
                "items": [
                    { "content": "Valid item" },
                    { "content": "" },
                    { "content": "Another valid item" }
                ]
            })),
        ).await;
        assert!(result.is_ok());
        let value = result.unwrap();
        assert_eq!(value["summary"]["skipped"], 1);
    }

    #[tokio::test]
    async fn test_batch_missing_args_fails() {
        let (storage, _dir) = test_storage().await;
        let result = execute(&storage, &test_cognitive(), None).await;
        assert!(result.is_err());
        assert!(result.unwrap_err().contains("Missing arguments"));
    }

    #[tokio::test]
    async fn test_batch_exceeds_20_items_fails() {
        let (storage, _dir) = test_storage().await;
        let items: Vec<serde_json::Value> = (0..21)
            .map(|i| serde_json::json!({ "content": format!("Item {}", i) }))
            .collect();
        let result = execute(&storage, &test_cognitive(), Some(serde_json::json!({ "items": items }))).await;
        assert!(result.is_err());
        assert!(result.unwrap_err().contains("Maximum 20 items"));
    }

    #[tokio::test]
    async fn test_batch_exactly_20_items_succeeds() {
        let (storage, _dir) = test_storage().await;
        let items: Vec<serde_json::Value> = (0..20)
            .map(|i| serde_json::json!({ "content": format!("Item {}", i) }))
            .collect();
        let result = execute(&storage, &test_cognitive(), Some(serde_json::json!({ "items": items }))).await;
        assert!(result.is_ok());
        let value = result.unwrap();
        assert_eq!(value["summary"]["total"], 20);
    }

    #[tokio::test]
    async fn test_batch_skips_whitespace_only_content() {
        let (storage, _dir) = test_storage().await;
        let result = execute(
            &storage, &test_cognitive(),
            Some(serde_json::json!({
                "items": [
                    { "content": "  \t\n  " },
                    { "content": "Valid content" }
                ]
            })),
        ).await;
        assert!(result.is_ok());
        let value = result.unwrap();
        assert_eq!(value["summary"]["skipped"], 1);
        assert_eq!(value["summary"]["created"], 1);
    }

    #[tokio::test]
    async fn test_batch_single_item_succeeds() {
        let (storage, _dir) = test_storage().await;
        let result = execute(
            &storage, &test_cognitive(),
            Some(serde_json::json!({
                "items": [{ "content": "Single item" }]
            })),
        ).await;
        assert!(result.is_ok());
        let value = result.unwrap();
        assert_eq!(value["summary"]["total"], 1);
        assert_eq!(value["success"], true);
    }

    #[tokio::test]
    async fn test_batch_items_with_all_fields() {
        let (storage, _dir) = test_storage().await;
        let result = execute(
            &storage, &test_cognitive(),
            Some(serde_json::json!({
                "items": [{
                    "content": "Full fields item",
                    "tags": ["test", "batch"],
                    "node_type": "decision",
                    "source": "test-suite"
                }]
            })),
        ).await;
        assert!(result.is_ok());
        let value = result.unwrap();
        assert_eq!(value["summary"]["created"], 1);
    }

    #[tokio::test]
    async fn test_batch_results_array_matches_items() {
        let (storage, _dir) = test_storage().await;
        let result = execute(
            &storage, &test_cognitive(),
            Some(serde_json::json!({
                "items": [
                    { "content": "First" },
                    { "content": "" },
                    { "content": "Third" }
                ]
            })),
        ).await;
        let value = result.unwrap();
        let results = value["results"].as_array().unwrap();
        assert_eq!(results.len(), 3);
        assert_eq!(results[0]["index"], 0);
        assert_eq!(results[1]["index"], 1);
        assert_eq!(results[1]["status"], "skipped");
        assert_eq!(results[2]["index"], 2);
    }

    #[tokio::test]
    async fn test_batch_success_true_when_only_skipped() {
        let (storage, _dir) = test_storage().await;
        let result = execute(
            &storage, &test_cognitive(),
            Some(serde_json::json!({
                "items": [
                    { "content": "" },
                    { "content": "   " }
                ]
            })),
        ).await;
        let value = result.unwrap();
        assert_eq!(value["success"], true); // skipped ≠ errors
        assert_eq!(value["summary"]["errors"], 0);
        assert_eq!(value["summary"]["skipped"], 2);
    }

    #[tokio::test]
    async fn test_batch_has_importance_scores() {
        let (storage, _dir) = test_storage().await;
        let result = execute(
            &storage, &test_cognitive(),
            Some(serde_json::json!({
                "items": [{ "content": "Important batch memory content" }]
            })),
        ).await;
        let value = result.unwrap();
        let results = value["results"].as_array().unwrap();
        assert!(results[0]["importanceScore"].is_number());
    }

    #[tokio::test]
    async fn test_no_content_no_items_fails() {
        let (storage, _dir) = test_storage().await;
        let args = serde_json::json!({ "tags": ["orphan"] });
        let result = execute(&storage, &test_cognitive(), Some(args)).await;
        assert!(result.is_err());
        assert!(result.unwrap_err().contains("content"));
    }
}
@@ -115,7 +115,7 @@ It remembers.

 ## Important: Tool Limit

-Windsurf has a **hard cap of 100 tools** across all MCP servers. Vestige uses ~19 tools, leaving plenty of room for other servers.
+Windsurf has a **hard cap of 100 tools** across all MCP servers. Vestige uses ~18 tools, leaving plenty of room for other servers.

 ---

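For reference, Windsurf registers MCP servers through a JSON config file (commonly `~/.codeium/windsurf/mcp_config.json`; check your Windsurf version's docs). A minimal sketch, where the server name and command path are illustrative placeholders rather than Vestige's documented install layout:

```json
{
  "mcpServers": {
    "vestige": {
      "command": "/path/to/vestige",
      "args": []
    }
  }
}
```

Every entry under `mcpServers` contributes its tool count toward the 100-tool cap, so Vestige's ~18 tools leave room for several other servers.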
@@ -51,7 +51,7 @@ Quit Xcode completely (Cmd+Q) and reopen your project.

 ### 4. Verify

-Type `/context` in the Agent panel. You should see `vestige` listed with 23 tools.
+Type `/context` in the Agent panel. You should see `vestige` listed with 18 tools.

 ---

@@ -254,7 +254,7 @@ echo " Next steps:"
 echo " 1. Restart Xcode (Cmd+Q, then reopen)"
 echo " 2. Open your project"
 echo " 3. Type /context in the Agent panel"
-echo " 4. You should see vestige listed with 23 tools"
+echo " 4. You should see vestige listed with 18 tools"
 echo ""
 echo " Try it:"
 echo " \"Remember that this project uses SwiftUI with MVVM architecture\""