Initial commit: Vestige v1.0.0 - Cognitive memory MCP server

FSRS-6 spaced repetition, spreading activation, synaptic tagging,
hippocampal indexing, and 130 years of memory research.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Sam Valladares 2026-01-25 01:31:03 -06:00
commit f9c60eb5a7
169 changed files with 97206 additions and 0 deletions


crates/vestige-mcp/Cargo.toml
@@ -0,0 +1,54 @@
[package]
name = "vestige-mcp"
version = "1.0.0"
edition = "2021"
description = "Cognitive memory MCP server for Claude - FSRS-6, spreading activation, synaptic tagging, and 130 years of memory research"
authors = ["samvallad33"]
license = "MIT OR Apache-2.0"
keywords = ["mcp", "ai", "memory", "fsrs", "neuroscience", "cognitive-science", "spaced-repetition"]
categories = ["command-line-utilities", "database"]
repository = "https://github.com/samvallad33/vestige"
[[bin]]
name = "vestige-mcp"
path = "src/main.rs"
[dependencies]
# ============================================================================
# VESTIGE CORE - The cognitive science engine
# ============================================================================
# Includes: FSRS-6, spreading activation, synaptic tagging, hippocampal indexing,
# memory states, context memory, importance signals, dreams, and more
vestige-core = { version = "1.0.0", path = "../vestige-core", features = ["full"] }
# ============================================================================
# MCP Server Dependencies
# ============================================================================
# Async runtime
tokio = { version = "1", features = ["full", "io-std"] }
# Serialization
serde = { version = "1", features = ["derive"] }
serde_json = "1"
# Date/Time
chrono = { version = "0.4", features = ["serde"] }
# UUID
uuid = { version = "1", features = ["v4", "serde"] }
# Error handling
thiserror = "2"
# Logging
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter", "json"] }
# Platform directories
directories = "6"
# Official Anthropic MCP Rust SDK
rmcp = "0.14"
[dev-dependencies]
tempfile = "3"


crates/vestige-mcp/README.md
@@ -0,0 +1,115 @@
# Vestige MCP Server
A bleeding-edge Rust MCP (Model Context Protocol) server for Vestige, providing Claude and other AI assistants with long-term memory capabilities.
## Features
- **FSRS-6 Algorithm**: State-of-the-art spaced repetition (21 parameters, personalized decay)
- **Dual-Strength Memory Model**: Based on Bjork & Bjork 1992 cognitive science research
- **Local Semantic Embeddings**: BGE-base-en-v1.5 (768d) via fastembed v5 (no external API)
- **HNSW Vector Search**: USearch-based, 20x faster than FAISS
- **Hybrid Search**: BM25 + semantic with RRF fusion
- **Codebase Memory**: Remember patterns, decisions, and context
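
The hybrid search feature above fuses the BM25 and semantic rankings with reciprocal rank fusion (RRF). A minimal sketch of the fusion step, with an illustrative `k = 60` and toy document rankings (not Vestige's actual constants or API):

```rust
use std::collections::HashMap;

/// Reciprocal Rank Fusion: score(d) = sum over rankings of 1 / (k + rank(d)).
/// A document ranked highly by either BM25 or semantic search floats to the top.
fn rrf_fuse(rankings: &[Vec<&str>], k: f64) -> Vec<(String, f64)> {
    let mut scores: HashMap<String, f64> = HashMap::new();
    for ranking in rankings {
        for (rank, doc) in ranking.iter().enumerate() {
            // Ranks are 1-based in the usual RRF formulation.
            *scores.entry(doc.to_string()).or_insert(0.0) += 1.0 / (k + (rank + 1) as f64);
        }
    }
    let mut fused: Vec<_> = scores.into_iter().collect();
    fused.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    fused
}

fn main() {
    let bm25 = vec!["doc-a", "doc-b", "doc-c"];
    let semantic = vec!["doc-c", "doc-a", "doc-d"];
    for (doc, score) in rrf_fuse(&[bm25, semantic], 60.0) {
        println!("{doc}: {score:.5}");
    }
}
```

Because RRF only consumes ranks, not raw scores, it needs no calibration between the incompatible BM25 and cosine-similarity scales.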
## Installation
```bash
cd /path/to/vestige/crates/vestige-mcp
cargo build --release
```
The binary will be at `target/release/vestige-mcp`.
## Claude Desktop Configuration
Add to your Claude Desktop config (`~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):
```json
{
"mcpServers": {
"vestige": {
"command": "/path/to/vestige-mcp"
}
}
}
```
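
To store the database somewhere other than the default platform directory, pass the server's `--data-dir` flag through the config's `args` array (the paths below are placeholders):

```json
{
  "mcpServers": {
    "vestige": {
      "command": "/path/to/vestige-mcp",
      "args": ["--data-dir", "/path/to/vestige-data"]
    }
  }
}
```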
## Available Tools
### Core Memory
| Tool | Description |
|------|-------------|
| `ingest` | Add new knowledge to memory |
| `recall` | Search and retrieve memories |
| `semantic_search` | Find conceptually similar content |
| `hybrid_search` | Combined keyword + semantic search |
| `get_knowledge` | Retrieve a specific memory by ID |
| `delete_knowledge` | Delete a memory |
| `mark_reviewed` | Review with FSRS rating (1-4) |
### Statistics & Maintenance
| Tool | Description |
|------|-------------|
| `get_stats` | Memory system statistics |
| `health_check` | System health status |
| `run_consolidation` | Apply decay, generate embeddings |
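
The decay that `run_consolidation` applies follows the FSRS family's power-law forgetting curve. A sketch using the FSRS-4.5 constants; FSRS-6 learns the decay exponent per user, so these numbers are illustrative only:

```rust
/// Power-law forgetting curve from the FSRS family:
/// R(t, S) = (1 + FACTOR * t / S) ^ DECAY, where S (stability) is the
/// interval in days at which retrievability has fallen to 90%.
/// DECAY and FACTOR are the FSRS-4.5 constants, used here for illustration.
fn retrievability(elapsed_days: f64, stability: f64) -> f64 {
    const DECAY: f64 = -0.5;
    const FACTOR: f64 = 19.0 / 81.0;
    (1.0 + FACTOR * elapsed_days / stability).powf(DECAY)
}

fn main() {
    // With stability = 10 days, retrievability is exactly 90% at t = 10
    // by construction of FACTOR.
    for t in [0.0, 10.0, 30.0, 90.0] {
        println!("t = {t:>4} days -> R = {:.3}", retrievability(t, 10.0));
    }
}
```

A `mark_reviewed` rating (1-4) updates the stability `S`, which is what stretches the curve out after each successful review.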
### Codebase Tools
| Tool | Description |
|------|-------------|
| `remember_pattern` | Remember code patterns |
| `remember_decision` | Remember architectural decisions |
| `get_codebase_context` | Get patterns and decisions |
## Available Resources
### Memory Resources
| URI | Description |
|-----|-------------|
| `memory://stats` | Current statistics |
| `memory://recent?n=10` | Recent memories |
| `memory://decaying` | Low retention memories |
| `memory://due` | Memories due for review |
### Codebase Resources
| URI | Description |
|-----|-------------|
| `codebase://structure` | Known codebases |
| `codebase://patterns` | Remembered patterns |
| `codebase://decisions` | Architectural decisions |
## Example Usage (with Claude)
```
User: Remember that we decided to use FSRS-6 instead of SM-2 because it's 20-30% more efficient.
Claude: [calls remember_decision]
I've recorded that architectural decision.
User: What decisions have we made about algorithms?
Claude: [calls get_codebase_context]
I found 1 decision:
- We decided to use FSRS-6 instead of SM-2 because it's 20-30% more efficient.
```
## Data Storage
- Database: `~/Library/Application Support/com.vestige.mcp/vestige-mcp.db` (macOS)
- Uses SQLite with FTS5 for full-text search
- Vector embeddings stored in separate table
## Protocol
- JSON-RPC 2.0 over stdio
- MCP Protocol Version: 2024-11-05
- Logging to stderr (stdout reserved for JSON-RPC)
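
To illustrate the framing, here is a hand-assembled client-side `initialize` request, one JSON object per line (hand-built to stay dependency-free; a real client would use a JSON library, and the client name/version are placeholders):

```rust
use std::io::Write;

/// Build one newline-delimited JSON-RPC 2.0 request line, as a client
/// would write it to the server's stdin.
fn frame_request(id: u64, method: &str, params: &str) -> String {
    format!(
        "{{\"jsonrpc\":\"2.0\",\"id\":{id},\"method\":\"{method}\",\"params\":{params}}}\n"
    )
}

fn main() {
    let init = frame_request(
        1,
        "initialize",
        r#"{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"demo","version":"0.1.0"}}"#,
    );
    // A client writes this line to the server's stdin, then reads one
    // JSON-RPC response line from its stdout.
    std::io::stdout().write_all(init.as_bytes()).unwrap();
}
```

Responses come back the same way: one JSON object per line on stdout, with all logging kept on stderr so it never corrupts the JSON-RPC stream.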
## License
MIT


crates/vestige-mcp/src/main.rs
@@ -0,0 +1,161 @@
//! Vestige MCP Server v1.0 - Cognitive Memory for Claude
//!
//! A bleeding-edge Rust MCP (Model Context Protocol) server that provides
//! Claude and other AI assistants with long-term memory capabilities
//! powered by 130 years of memory research.
//!
//! Core Features:
//! - FSRS-6 spaced repetition algorithm (21 parameters, 30% more efficient than SM-2)
//! - Bjork dual-strength memory model
//! - Local semantic embeddings (768-dim BGE, no external API)
//! - HNSW vector search (20x faster than FAISS)
//! - Hybrid search (BM25 + semantic + RRF fusion)
//!
//! Neuroscience Features:
//! - Synaptic Tagging & Capture (retroactive importance)
//! - Spreading Activation Networks (multi-hop associations)
//! - Hippocampal Indexing (two-phase retrieval)
//! - Memory States (active/dormant/silent/unavailable)
//! - Context-Dependent Memory (encoding specificity)
//! - Multi-Channel Importance Signals
//! - Predictive Retrieval
//! - Prospective Memory (intentions with triggers)
//!
//! Advanced Features:
//! - Memory Dreams (insight generation during consolidation)
//! - Memory Compression
//! - Reconsolidation (memories editable on retrieval)
//! - Memory Chains (reasoning paths)
mod protocol;
mod resources;
mod server;
mod tools;
use std::io;
use std::path::PathBuf;
use std::sync::Arc;
use tokio::sync::Mutex;
use tracing::{error, info, Level};
use tracing_subscriber::EnvFilter;
// Use vestige-core for the cognitive science engine
use vestige_core::Storage;
use crate::protocol::stdio::StdioTransport;
use crate::server::McpServer;
/// Parse command-line arguments and return the optional data directory path.
/// Returns `None` for the path if no `--data-dir` was specified.
/// Exits the process if `--help` or `--version` is requested.
fn parse_args() -> Option<PathBuf> {
let args: Vec<String> = std::env::args().collect();
let mut data_dir: Option<PathBuf> = None;
let mut i = 1;
while i < args.len() {
match args[i].as_str() {
"--help" | "-h" => {
println!("Vestige MCP Server v{}", env!("CARGO_PKG_VERSION"));
println!();
println!("FSRS-6 powered AI memory server using the Model Context Protocol.");
println!();
println!("USAGE:");
println!(" vestige-mcp [OPTIONS]");
println!();
println!("OPTIONS:");
println!(" -h, --help Print help information");
println!(" -V, --version Print version information");
println!(" --data-dir <PATH> Custom data directory");
println!();
println!("ENVIRONMENT:");
println!(" RUST_LOG Log level filter (e.g., debug, info, warn, error)");
println!();
println!("EXAMPLES:");
println!(" vestige-mcp");
println!(" vestige-mcp --data-dir /custom/path");
println!(" RUST_LOG=debug vestige-mcp");
std::process::exit(0);
}
"--version" | "-V" => {
println!("vestige-mcp {}", env!("CARGO_PKG_VERSION"));
std::process::exit(0);
}
"--data-dir" => {
i += 1;
if i >= args.len() {
eprintln!("error: --data-dir requires a path argument");
eprintln!("Usage: vestige-mcp --data-dir <PATH>");
std::process::exit(1);
}
data_dir = Some(PathBuf::from(&args[i]));
}
arg if arg.starts_with("--data-dir=") => {
// Safe: we just verified the prefix exists with starts_with
let path = arg.strip_prefix("--data-dir=").unwrap_or("");
if path.is_empty() {
eprintln!("error: --data-dir requires a path argument");
eprintln!("Usage: vestige-mcp --data-dir <PATH>");
std::process::exit(1);
}
data_dir = Some(PathBuf::from(path));
}
arg => {
eprintln!("error: unknown argument '{}'", arg);
eprintln!("Usage: vestige-mcp [OPTIONS]");
eprintln!("Try 'vestige-mcp --help' for more information.");
std::process::exit(1);
}
}
i += 1;
}
data_dir
}
#[tokio::main]
async fn main() {
// Parse CLI arguments first (before logging init, so --help/--version work cleanly)
let data_dir = parse_args();
// Initialize logging to stderr (stdout is for JSON-RPC)
tracing_subscriber::fmt()
.with_env_filter(
EnvFilter::from_default_env()
.add_directive(Level::INFO.into())
)
.with_writer(io::stderr)
.with_target(false)
.with_ansi(false)
.init();
info!("Vestige MCP Server v{} starting...", env!("CARGO_PKG_VERSION"));
// Initialize storage with optional custom data directory
let storage = match Storage::new(data_dir) {
Ok(s) => {
info!("Storage initialized successfully");
Arc::new(Mutex::new(s))
}
Err(e) => {
error!("Failed to initialize storage: {}", e);
std::process::exit(1);
}
};
// Create MCP server
let server = McpServer::new(storage);
// Create stdio transport
let transport = StdioTransport::new();
info!("Starting MCP server on stdio...");
// Run the server
if let Err(e) = transport.run(server).await {
error!("Server error: {}", e);
std::process::exit(1);
}
info!("Vestige MCP Server shutting down");
}


crates/vestige-mcp/src/protocol/messages.rs
@@ -0,0 +1,174 @@
//! MCP Protocol Messages
//!
//! Request and response types for MCP methods.
use serde::{Deserialize, Serialize};
use serde_json::Value;
use std::collections::HashMap;
// ============================================================================
// INITIALIZE
// ============================================================================
/// Initialize request from client
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct InitializeRequest {
pub protocol_version: String,
pub capabilities: ClientCapabilities,
pub client_info: ClientInfo,
}
impl Default for InitializeRequest {
fn default() -> Self {
Self {
protocol_version: "2024-11-05".to_string(),
capabilities: ClientCapabilities::default(),
client_info: ClientInfo {
name: "unknown".to_string(),
version: "0.0.0".to_string(),
},
}
}
}
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ClientCapabilities {
#[serde(skip_serializing_if = "Option::is_none")]
pub roots: Option<HashMap<String, Value>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub sampling: Option<HashMap<String, Value>>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ClientInfo {
pub name: String,
pub version: String,
}
/// Initialize response to client
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct InitializeResult {
pub protocol_version: String,
pub server_info: ServerInfo,
pub capabilities: ServerCapabilities,
#[serde(skip_serializing_if = "Option::is_none")]
pub instructions: Option<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ServerInfo {
pub name: String,
pub version: String,
}
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ServerCapabilities {
#[serde(skip_serializing_if = "Option::is_none")]
pub tools: Option<HashMap<String, Value>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub resources: Option<HashMap<String, Value>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub prompts: Option<HashMap<String, Value>>,
}
// ============================================================================
// TOOLS
// ============================================================================
/// Tool description for tools/list
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ToolDescription {
pub name: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub description: Option<String>,
pub input_schema: Value,
}
/// Result of tools/list
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ListToolsResult {
pub tools: Vec<ToolDescription>,
}
/// Request for tools/call
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct CallToolRequest {
pub name: String,
#[serde(default)]
pub arguments: Option<Value>,
}
/// Result of tools/call
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct CallToolResult {
pub content: Vec<ToolResultContent>,
#[serde(skip_serializing_if = "Option::is_none")]
pub is_error: Option<bool>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ToolResultContent {
#[serde(rename = "type")]
pub content_type: String,
pub text: String,
}
// ============================================================================
// RESOURCES
// ============================================================================
/// Resource description for resources/list
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ResourceDescription {
pub uri: String,
pub name: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub description: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub mime_type: Option<String>,
}
/// Result of resources/list
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ListResourcesResult {
pub resources: Vec<ResourceDescription>,
}
/// Request for resources/read
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ReadResourceRequest {
pub uri: String,
}
/// Result of resources/read
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ReadResourceResult {
pub contents: Vec<ResourceContent>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ResourceContent {
pub uri: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub mime_type: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub text: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub blob: Option<String>,
}


crates/vestige-mcp/src/protocol/mod.rs
@@ -0,0 +1,7 @@
//! MCP Protocol Implementation
//!
//! JSON-RPC 2.0 over stdio for the Model Context Protocol.
pub mod messages;
pub mod stdio;
pub mod types;


crates/vestige-mcp/src/protocol/stdio.rs
@@ -0,0 +1,84 @@
//! stdio Transport for MCP
//!
//! Handles JSON-RPC communication over stdin/stdout.
use std::io::{self, BufRead, BufReader, Write};
use tracing::{debug, error, warn};
use super::types::{JsonRpcError, JsonRpcRequest, JsonRpcResponse};
use crate::server::McpServer;
/// stdio Transport for MCP server
pub struct StdioTransport;
impl StdioTransport {
pub fn new() -> Self {
Self
}
/// Run the MCP server over stdio
pub async fn run(self, mut server: McpServer) -> Result<(), io::Error> {
let stdin = io::stdin();
let stdout = io::stdout();
let reader = BufReader::new(stdin.lock());
let mut stdout = stdout.lock();
for line in reader.lines() {
let line = match line {
Ok(l) => l,
Err(e) => {
error!("Failed to read line: {}", e);
break;
}
};
if line.is_empty() {
continue;
}
debug!("Received: {}", line);
// Parse JSON-RPC request
let request: JsonRpcRequest = match serde_json::from_str(&line) {
Ok(r) => r,
Err(e) => {
warn!("Failed to parse request: {}", e);
let error_response = JsonRpcResponse::error(None, JsonRpcError::parse_error());
match serde_json::to_string(&error_response) {
Ok(response_json) => {
writeln!(stdout, "{}", response_json)?;
stdout.flush()?;
}
Err(e) => {
error!("Failed to serialize error response: {}", e);
}
}
continue;
}
};
// Handle the request
if let Some(response) = server.handle_request(request).await {
match serde_json::to_string(&response) {
Ok(response_json) => {
debug!("Sending: {}", response_json);
writeln!(stdout, "{}", response_json)?;
stdout.flush()?;
}
Err(e) => {
error!("Failed to serialize response: {}", e);
}
}
}
}
Ok(())
}
}
impl Default for StdioTransport {
fn default() -> Self {
Self::new()
}
}


crates/vestige-mcp/src/protocol/types.rs
@@ -0,0 +1,201 @@
//! MCP JSON-RPC Types
//!
//! Core types for JSON-RPC 2.0 protocol used by MCP.
use serde::{Deserialize, Serialize};
use serde_json::Value;
/// MCP Protocol Version
pub const MCP_VERSION: &str = "2024-11-05";
/// JSON-RPC version
pub const JSONRPC_VERSION: &str = "2.0";
// ============================================================================
// JSON-RPC REQUEST/RESPONSE
// ============================================================================
/// JSON-RPC Request
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct JsonRpcRequest {
pub jsonrpc: String,
pub id: Option<Value>,
pub method: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub params: Option<Value>,
}
/// JSON-RPC Response
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct JsonRpcResponse {
pub jsonrpc: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub id: Option<Value>,
#[serde(skip_serializing_if = "Option::is_none")]
pub result: Option<Value>,
#[serde(skip_serializing_if = "Option::is_none")]
pub error: Option<JsonRpcError>,
}
impl JsonRpcResponse {
pub fn success(id: Option<Value>, result: Value) -> Self {
Self {
jsonrpc: JSONRPC_VERSION.to_string(),
id,
result: Some(result),
error: None,
}
}
pub fn error(id: Option<Value>, error: JsonRpcError) -> Self {
Self {
jsonrpc: JSONRPC_VERSION.to_string(),
id,
result: None,
error: Some(error),
}
}
}
// ============================================================================
// JSON-RPC ERROR
// ============================================================================
/// JSON-RPC Error Codes (standard + MCP-specific)
#[derive(Debug, Clone, Copy)]
pub enum ErrorCode {
// Standard JSON-RPC errors
ParseError = -32700,
InvalidRequest = -32600,
MethodNotFound = -32601,
InvalidParams = -32602,
InternalError = -32603,
// MCP-specific errors (-32000 to -32099)
ConnectionClosed = -32000,
RequestTimeout = -32001,
ResourceNotFound = -32002,
ServerNotInitialized = -32003,
}
impl From<ErrorCode> for i32 {
fn from(code: ErrorCode) -> Self {
code as i32
}
}
/// JSON-RPC Error
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct JsonRpcError {
pub code: i32,
pub message: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub data: Option<Value>,
}
impl JsonRpcError {
fn new(code: ErrorCode, message: &str) -> Self {
Self {
code: code.into(),
message: message.to_string(),
data: None,
}
}
pub fn parse_error() -> Self {
Self::new(ErrorCode::ParseError, "Parse error")
}
pub fn method_not_found() -> Self {
Self::new(ErrorCode::MethodNotFound, "Method not found")
}
pub fn method_not_found_with_message(message: &str) -> Self {
Self::new(ErrorCode::MethodNotFound, message)
}
pub fn invalid_params(message: &str) -> Self {
Self::new(ErrorCode::InvalidParams, message)
}
pub fn internal_error(message: &str) -> Self {
Self::new(ErrorCode::InternalError, message)
}
pub fn server_not_initialized() -> Self {
Self::new(ErrorCode::ServerNotInitialized, "Server not initialized")
}
pub fn resource_not_found(uri: &str) -> Self {
Self::new(ErrorCode::ResourceNotFound, &format!("Resource not found: {}", uri))
}
}
impl std::fmt::Display for JsonRpcError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "[{}] {}", self.code, self.message)
}
}
impl std::error::Error for JsonRpcError {}
// ============================================================================
// TESTS
// ============================================================================
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_request_serialization() {
let request = JsonRpcRequest {
jsonrpc: JSONRPC_VERSION.to_string(),
id: Some(Value::Number(1.into())),
method: "test".to_string(),
params: Some(serde_json::json!({"key": "value"})),
};
let json = serde_json::to_string(&request).unwrap();
let parsed: JsonRpcRequest = serde_json::from_str(&json).unwrap();
assert_eq!(parsed.method, "test");
assert!(parsed.id.is_some()); // Has id, not a notification
}
#[test]
fn test_notification() {
let notification = JsonRpcRequest {
jsonrpc: JSONRPC_VERSION.to_string(),
id: None,
method: "notify".to_string(),
params: None,
};
assert!(notification.id.is_none()); // No id = notification
}
#[test]
fn test_response_success() {
let response = JsonRpcResponse::success(
Some(Value::Number(1.into())),
serde_json::json!({"result": "ok"}),
);
assert!(response.result.is_some());
assert!(response.error.is_none());
}
#[test]
fn test_response_error() {
let response = JsonRpcResponse::error(
Some(Value::Number(1.into())),
JsonRpcError::method_not_found(),
);
assert!(response.result.is_none());
assert!(response.error.is_some());
assert_eq!(response.error.unwrap().code, -32601);
}
}


crates/vestige-mcp/src/resources/codebase.rs
@@ -0,0 +1,179 @@
//! Codebase Resources
//!
//! codebase:// URI scheme resources for the MCP server.
use std::sync::Arc;
use tokio::sync::Mutex;
use vestige_core::{RecallInput, SearchMode, Storage};
/// Read a codebase:// resource
pub async fn read(storage: &Arc<Mutex<Storage>>, uri: &str) -> Result<String, String> {
let path = uri.strip_prefix("codebase://").unwrap_or("");
// Parse query parameters if present
let (path, query) = match path.split_once('?') {
Some((p, q)) => (p, Some(q)),
None => (path, None),
};
match path {
"structure" => read_structure(storage).await,
"patterns" => read_patterns(storage, query).await,
"decisions" => read_decisions(storage, query).await,
_ => Err(format!("Unknown codebase resource: {}", path)),
}
}
fn parse_codebase_param(query: Option<&str>) -> Option<String> {
query.and_then(|q| {
q.split('&').find_map(|pair| {
let (k, v) = pair.split_once('=')?;
if k == "codebase" {
Some(v.to_string())
} else {
None
}
})
})
}
async fn read_structure(storage: &Arc<Mutex<Storage>>) -> Result<String, String> {
let storage = storage.lock().await;
// Get all pattern and decision nodes to infer structure
// NOTE: We run separate queries because FTS5 sanitization removes OR operators
// and wraps queries in quotes (phrase search), so "pattern OR decision" would
// become a phrase search for "pattern decision" instead of matching either term.
let search_terms = ["pattern", "decision", "architecture"];
let mut all_nodes = Vec::new();
let mut seen_ids = std::collections::HashSet::new();
for term in &search_terms {
let input = RecallInput {
query: term.to_string(),
limit: 100,
min_retention: 0.0,
search_mode: SearchMode::Keyword,
valid_at: None,
};
for node in storage.recall(input).unwrap_or_default() {
if seen_ids.insert(node.id.clone()) {
all_nodes.push(node);
}
}
}
let nodes = all_nodes;
// Extract unique codebases from tags
let mut codebases: std::collections::HashSet<String> = std::collections::HashSet::new();
for node in &nodes {
for tag in &node.tags {
if let Some(codebase) = tag.strip_prefix("codebase:") {
codebases.insert(codebase.to_string());
}
}
}
let pattern_count = nodes.iter().filter(|n| n.node_type == "pattern").count();
let decision_count = nodes.iter().filter(|n| n.node_type == "decision").count();
let result = serde_json::json!({
"knownCodebases": codebases.into_iter().collect::<Vec<_>>(),
"totalPatterns": pattern_count,
"totalDecisions": decision_count,
"totalMemories": nodes.len(),
"hint": "Use codebase://patterns?codebase=NAME or codebase://decisions?codebase=NAME for specific codebase context",
});
serde_json::to_string_pretty(&result).map_err(|e| e.to_string())
}
async fn read_patterns(storage: &Arc<Mutex<Storage>>, query: Option<&str>) -> Result<String, String> {
let storage = storage.lock().await;
let codebase = parse_codebase_param(query);
let search_query = match &codebase {
Some(cb) => format!("pattern codebase:{}", cb),
None => "pattern".to_string(),
};
let input = RecallInput {
query: search_query,
limit: 50,
min_retention: 0.0,
search_mode: SearchMode::Keyword,
valid_at: None,
};
let nodes = storage.recall(input).unwrap_or_default();
let patterns: Vec<serde_json::Value> = nodes
.iter()
.filter(|n| n.node_type == "pattern")
.map(|n| {
serde_json::json!({
"id": n.id,
"content": n.content,
"tags": n.tags,
"retentionStrength": n.retention_strength,
"createdAt": n.created_at.to_rfc3339(),
"source": n.source,
})
})
.collect();
let result = serde_json::json!({
"codebase": codebase,
"total": patterns.len(),
"patterns": patterns,
});
serde_json::to_string_pretty(&result).map_err(|e| e.to_string())
}
async fn read_decisions(storage: &Arc<Mutex<Storage>>, query: Option<&str>) -> Result<String, String> {
let storage = storage.lock().await;
let codebase = parse_codebase_param(query);
let search_query = match &codebase {
Some(cb) => format!("decision architecture codebase:{}", cb),
None => "decision architecture".to_string(),
};
let input = RecallInput {
query: search_query,
limit: 50,
min_retention: 0.0,
search_mode: SearchMode::Keyword,
valid_at: None,
};
let nodes = storage.recall(input).unwrap_or_default();
let decisions: Vec<serde_json::Value> = nodes
.iter()
.filter(|n| n.node_type == "decision")
.map(|n| {
serde_json::json!({
"id": n.id,
"content": n.content,
"tags": n.tags,
"retentionStrength": n.retention_strength,
"createdAt": n.created_at.to_rfc3339(),
"source": n.source,
})
})
.collect();
let result = serde_json::json!({
"codebase": codebase,
"total": decisions.len(),
"decisions": decisions,
});
serde_json::to_string_pretty(&result).map_err(|e| e.to_string())
}


crates/vestige-mcp/src/resources/memory.rs
@@ -0,0 +1,358 @@
//! Memory Resources
//!
//! memory:// URI scheme resources for the MCP server.
use std::sync::Arc;
use tokio::sync::Mutex;
use vestige_core::Storage;
/// Read a memory:// resource
pub async fn read(storage: &Arc<Mutex<Storage>>, uri: &str) -> Result<String, String> {
let path = uri.strip_prefix("memory://").unwrap_or("");
// Parse query parameters if present
let (path, query) = match path.split_once('?') {
Some((p, q)) => (p, Some(q)),
None => (path, None),
};
match path {
"stats" => read_stats(storage).await,
"recent" => {
let n = parse_query_param(query, "n", 10);
read_recent(storage, n).await
}
"decaying" => read_decaying(storage).await,
"due" => read_due(storage).await,
"intentions" => read_intentions(storage).await,
"intentions/due" => read_triggered_intentions(storage).await,
"insights" => read_insights(storage).await,
"consolidation-log" => read_consolidation_log(storage).await,
_ => Err(format!("Unknown memory resource: {}", path)),
}
}
fn parse_query_param(query: Option<&str>, key: &str, default: i32) -> i32 {
query
.and_then(|q| {
q.split('&')
.find_map(|pair| {
let (k, v) = pair.split_once('=')?;
if k == key {
v.parse().ok()
} else {
None
}
})
})
.unwrap_or(default)
.clamp(1, 100)
}
async fn read_stats(storage: &Arc<Mutex<Storage>>) -> Result<String, String> {
let storage = storage.lock().await;
let stats = storage.get_stats().map_err(|e| e.to_string())?;
let embedding_coverage = if stats.total_nodes > 0 {
(stats.nodes_with_embeddings as f64 / stats.total_nodes as f64) * 100.0
} else {
0.0
};
let status = if stats.total_nodes == 0 {
"empty"
} else if stats.average_retention < 0.3 {
"critical"
} else if stats.average_retention < 0.5 {
"degraded"
} else {
"healthy"
};
let result = serde_json::json!({
"status": status,
"totalNodes": stats.total_nodes,
"nodesDueForReview": stats.nodes_due_for_review,
"averageRetention": stats.average_retention,
"averageStorageStrength": stats.average_storage_strength,
"averageRetrievalStrength": stats.average_retrieval_strength,
"oldestMemory": stats.oldest_memory.map(|d| d.to_rfc3339()),
"newestMemory": stats.newest_memory.map(|d| d.to_rfc3339()),
"nodesWithEmbeddings": stats.nodes_with_embeddings,
"embeddingCoverage": format!("{:.1}%", embedding_coverage),
"embeddingModel": stats.embedding_model,
"embeddingServiceReady": storage.is_embedding_ready(),
});
serde_json::to_string_pretty(&result).map_err(|e| e.to_string())
}
async fn read_recent(storage: &Arc<Mutex<Storage>>, limit: i32) -> Result<String, String> {
let storage = storage.lock().await;
let nodes = storage.get_all_nodes(limit, 0).map_err(|e| e.to_string())?;
let items: Vec<serde_json::Value> = nodes
.iter()
.map(|n| {
serde_json::json!({
"id": n.id,
"summary": if n.content.chars().count() > 200 {
// Truncate on a char boundary; byte slicing (&content[..200]) can panic mid-UTF-8.
format!("{}...", n.content.chars().take(200).collect::<String>())
} else {
n.content.clone()
},
"nodeType": n.node_type,
"tags": n.tags,
"createdAt": n.created_at.to_rfc3339(),
"retentionStrength": n.retention_strength,
})
})
.collect();
let result = serde_json::json!({
"total": nodes.len(),
"items": items,
});
serde_json::to_string_pretty(&result).map_err(|e| e.to_string())
}
async fn read_decaying(storage: &Arc<Mutex<Storage>>) -> Result<String, String> {
let storage = storage.lock().await;
// Get nodes with low retention (below 0.5)
let all_nodes = storage.get_all_nodes(100, 0).map_err(|e| e.to_string())?;
let mut decaying: Vec<_> = all_nodes
.into_iter()
.filter(|n| n.retention_strength < 0.5)
.collect();
// Sort by retention strength (lowest first)
decaying.sort_by(|a, b| {
a.retention_strength
.partial_cmp(&b.retention_strength)
.unwrap_or(std::cmp::Ordering::Equal)
});
let items: Vec<serde_json::Value> = decaying
.iter()
.take(20)
.map(|n| {
let days_since_access = (chrono::Utc::now() - n.last_accessed).num_days();
serde_json::json!({
"id": n.id,
"summary": if n.content.chars().count() > 200 {
// Truncate on a char boundary; byte slicing (&content[..200]) can panic mid-UTF-8.
format!("{}...", n.content.chars().take(200).collect::<String>())
} else {
n.content.clone()
},
"retentionStrength": n.retention_strength,
"daysSinceAccess": days_since_access,
"lastAccessed": n.last_accessed.to_rfc3339(),
"hint": if n.retention_strength < 0.2 {
"Critical - review immediately!"
} else {
"Should be reviewed soon"
},
})
})
.collect();
let result = serde_json::json!({
"total": decaying.len(),
"showing": items.len(),
"items": items,
"recommendation": if decaying.is_empty() {
"All memories are healthy!"
} else if decaying.len() > 10 {
"Many memories are decaying. Consider reviewing the most important ones."
} else {
"Some memories need attention. Review to strengthen retention."
},
});
serde_json::to_string_pretty(&result).map_err(|e| e.to_string())
}
async fn read_due(storage: &Arc<Mutex<Storage>>) -> Result<String, String> {
let storage = storage.lock().await;
let nodes = storage.get_review_queue(20).map_err(|e| e.to_string())?;
let items: Vec<serde_json::Value> = nodes
.iter()
.map(|n| {
serde_json::json!({
"id": n.id,
"summary": if n.content.chars().count() > 200 {
// Truncate on a char boundary; byte slicing (&content[..200]) can panic mid-UTF-8.
format!("{}...", n.content.chars().take(200).collect::<String>())
} else {
n.content.clone()
},
"nodeType": n.node_type,
"retentionStrength": n.retention_strength,
"difficulty": n.difficulty,
"reps": n.reps,
"nextReview": n.next_review.map(|d| d.to_rfc3339()),
})
})
.collect();
let result = serde_json::json!({
"total": nodes.len(),
"items": items,
"instruction": "Use mark_reviewed with rating 1-4 to complete review",
});
serde_json::to_string_pretty(&result).map_err(|e| e.to_string())
}
async fn read_intentions(storage: &Arc<Mutex<Storage>>) -> Result<String, String> {
let storage = storage.lock().await;
let intentions = storage.get_active_intentions().map_err(|e| e.to_string())?;
let now = chrono::Utc::now();
let items: Vec<serde_json::Value> = intentions
.iter()
.map(|i| {
let is_overdue = i.deadline.map(|d| d < now).unwrap_or(false);
serde_json::json!({
"id": i.id,
"description": i.content,
"status": i.status,
"priority": match i.priority {
1 => "low",
3 => "high",
4 => "critical",
_ => "normal",
},
"createdAt": i.created_at.to_rfc3339(),
"deadline": i.deadline.map(|d| d.to_rfc3339()),
"isOverdue": is_overdue,
"snoozedUntil": i.snoozed_until.map(|d| d.to_rfc3339()),
})
})
.collect();
let overdue_count = items.iter().filter(|i| i["isOverdue"].as_bool().unwrap_or(false)).count();
let result = serde_json::json!({
"total": intentions.len(),
"overdueCount": overdue_count,
"items": items,
"tip": "Use set_intention to add new intentions, complete_intention to mark done",
});
serde_json::to_string_pretty(&result).map_err(|e| e.to_string())
}
async fn read_triggered_intentions(storage: &Arc<Mutex<Storage>>) -> Result<String, String> {
let storage = storage.lock().await;
let overdue = storage.get_overdue_intentions().map_err(|e| e.to_string())?;
let now = chrono::Utc::now();
let items: Vec<serde_json::Value> = overdue
.iter()
.map(|i| {
let overdue_by = i.deadline.map(|d| {
let duration = now - d;
if duration.num_days() > 0 {
format!("{} days", duration.num_days())
} else if duration.num_hours() > 0 {
format!("{} hours", duration.num_hours())
} else {
format!("{} minutes", duration.num_minutes())
}
});
serde_json::json!({
"id": i.id,
"description": i.content,
"priority": match i.priority {
1 => "low",
3 => "high",
4 => "critical",
_ => "normal",
},
"deadline": i.deadline.map(|d| d.to_rfc3339()),
"overdueBy": overdue_by,
})
})
.collect();
let result = serde_json::json!({
"triggered": items.len(),
"items": items,
"message": if items.is_empty() {
"No overdue intentions!"
} else {
"These intentions need attention"
},
});
serde_json::to_string_pretty(&result).map_err(|e| e.to_string())
}
async fn read_insights(storage: &Arc<Mutex<Storage>>) -> Result<String, String> {
let storage = storage.lock().await;
let insights = storage.get_insights(50).map_err(|e| e.to_string())?;
let pending: Vec<_> = insights.iter().filter(|i| i.feedback.is_none()).collect();
let accepted: Vec<_> = insights.iter().filter(|i| i.feedback.as_deref() == Some("accepted")).collect();
let items: Vec<serde_json::Value> = insights
.iter()
.map(|i| {
serde_json::json!({
"id": i.id,
"insight": i.insight,
"type": i.insight_type,
"confidence": i.confidence,
"noveltyScore": i.novelty_score,
"sourceMemories": i.source_memories,
"generatedAt": i.generated_at.to_rfc3339(),
"feedback": i.feedback,
})
})
.collect();
let result = serde_json::json!({
"total": insights.len(),
"pendingReview": pending.len(),
"accepted": accepted.len(),
"items": items,
"tip": "These insights were discovered during memory consolidation",
});
serde_json::to_string_pretty(&result).map_err(|e| e.to_string())
}
async fn read_consolidation_log(storage: &Arc<Mutex<Storage>>) -> Result<String, String> {
let storage = storage.lock().await;
let history = storage.get_consolidation_history(20).map_err(|e| e.to_string())?;
let last_run = storage.get_last_consolidation().map_err(|e| e.to_string())?;
let items: Vec<serde_json::Value> = history
.iter()
.map(|h| {
serde_json::json!({
"id": h.id,
"completedAt": h.completed_at.to_rfc3339(),
"durationMs": h.duration_ms,
"memoriesReplayed": h.memories_replayed,
"connectionsFound": h.connections_found,
"connectionsStrengthened": h.connections_strengthened,
"connectionsPruned": h.connections_pruned,
"insightsGenerated": h.insights_generated,
})
})
.collect();
let result = serde_json::json!({
"lastRun": last_run.map(|d| d.to_rfc3339()),
"totalRuns": history.len(),
"history": items,
});
serde_json::to_string_pretty(&result).map_err(|e| e.to_string())
}


@@ -0,0 +1,6 @@
//! MCP Resources
//!
//! Resource implementations for the Vestige MCP server.
pub mod codebase;
pub mod memory;


@@ -0,0 +1,765 @@
//! MCP Server Core
//!
//! Handles the main MCP server logic, routing requests to appropriate
//! tool and resource handlers.
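//!
//! A sketch of a typical stdio exchange (shapes are illustrative; the exact
//! field names are defined in `protocol::messages`):
//!
//! ```json
//! {"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {"protocolVersion": "2024-11-05", "capabilities": {}, "clientInfo": {"name": "client", "version": "0.1.0"}}}
//! {"jsonrpc": "2.0", "id": 1, "result": {"protocolVersion": "2024-11-05", "serverInfo": {"name": "vestige", "version": "1.0.0"}, "capabilities": {"tools": {"listChanged": false}, "resources": {"listChanged": false}}}}
//! {"jsonrpc": "2.0", "method": "notifications/initialized"}
//! ```
//!
//! Requests other than `initialize` are rejected until the handshake completes;
//! notifications produce no response line.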
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::Mutex;
use tracing::{debug, info, warn};
use crate::protocol::messages::{
CallToolRequest, CallToolResult, InitializeRequest, InitializeResult,
ListResourcesResult, ListToolsResult, ReadResourceRequest, ReadResourceResult,
ResourceDescription, ServerCapabilities, ServerInfo, ToolDescription,
};
use crate::protocol::types::{JsonRpcError, JsonRpcRequest, JsonRpcResponse, MCP_VERSION};
use crate::resources;
use crate::tools;
use vestige_core::Storage;
/// MCP Server implementation
pub struct McpServer {
storage: Arc<Mutex<Storage>>,
initialized: bool,
}
impl McpServer {
pub fn new(storage: Arc<Mutex<Storage>>) -> Self {
Self {
storage,
initialized: false,
}
}
/// Handle an incoming JSON-RPC request
pub async fn handle_request(&mut self, request: JsonRpcRequest) -> Option<JsonRpcResponse> {
debug!("Handling request: {}", request.method);
// Check initialization for non-initialize requests
if !self.initialized && request.method != "initialize" && request.method != "notifications/initialized" {
warn!("Rejecting request '{}': server not initialized", request.method);
return Some(JsonRpcResponse::error(
request.id,
JsonRpcError::server_not_initialized(),
));
}
let result = match request.method.as_str() {
"initialize" => self.handle_initialize(request.params).await,
"notifications/initialized" => {
// Notification, no response needed
return None;
}
"tools/list" => self.handle_tools_list().await,
"tools/call" => self.handle_tools_call(request.params).await,
"resources/list" => self.handle_resources_list().await,
"resources/read" => self.handle_resources_read(request.params).await,
"ping" => Ok(serde_json::json!({})),
method => {
warn!("Unknown method: {}", method);
Err(JsonRpcError::method_not_found())
}
};
Some(match result {
Ok(result) => JsonRpcResponse::success(request.id, result),
Err(error) => JsonRpcResponse::error(request.id, error),
})
}
/// Handle initialize request
async fn handle_initialize(
&mut self,
params: Option<serde_json::Value>,
) -> Result<serde_json::Value, JsonRpcError> {
let _request: InitializeRequest = match params {
Some(p) => serde_json::from_value(p).map_err(|e| JsonRpcError::invalid_params(&e.to_string()))?,
None => InitializeRequest::default(),
};
self.initialized = true;
info!("MCP session initialized");
let result = InitializeResult {
protocol_version: MCP_VERSION.to_string(),
server_info: ServerInfo {
name: "vestige".to_string(),
version: env!("CARGO_PKG_VERSION").to_string(),
},
capabilities: ServerCapabilities {
tools: Some({
let mut map = HashMap::new();
map.insert("listChanged".to_string(), serde_json::json!(false));
map
}),
resources: Some({
let mut map = HashMap::new();
map.insert("listChanged".to_string(), serde_json::json!(false));
map
}),
prompts: None,
},
instructions: Some(
"Vestige is your long-term memory system. Use it to remember important information, \
recall past knowledge, and maintain context across sessions. The system uses \
FSRS-6 spaced repetition to naturally decay memories over time - review important \
memories to strengthen them.".to_string()
),
};
serde_json::to_value(result).map_err(|e| JsonRpcError::internal_error(&e.to_string()))
}
/// Handle tools/list request
async fn handle_tools_list(&self) -> Result<serde_json::Value, JsonRpcError> {
let tools = vec![
// Core memory tools
ToolDescription {
name: "ingest".to_string(),
description: Some("Add new knowledge to memory. Use for facts, concepts, decisions, or any information worth remembering.".to_string()),
input_schema: tools::ingest::schema(),
},
ToolDescription {
name: "recall".to_string(),
description: Some("Search and retrieve knowledge from memory. Returns matches ranked by relevance and retention strength.".to_string()),
input_schema: tools::recall::schema(),
},
ToolDescription {
name: "semantic_search".to_string(),
description: Some("Search memories using semantic similarity. Finds conceptually related content even without keyword matches.".to_string()),
input_schema: tools::search::semantic_schema(),
},
ToolDescription {
name: "hybrid_search".to_string(),
description: Some("Combined keyword + semantic search with RRF fusion. Best for comprehensive retrieval.".to_string()),
input_schema: tools::search::hybrid_schema(),
},
ToolDescription {
name: "get_knowledge".to_string(),
description: Some("Retrieve a specific memory by ID.".to_string()),
input_schema: tools::knowledge::get_schema(),
},
ToolDescription {
name: "delete_knowledge".to_string(),
description: Some("Delete a memory by ID.".to_string()),
input_schema: tools::knowledge::delete_schema(),
},
ToolDescription {
name: "mark_reviewed".to_string(),
description: Some("Mark a memory as reviewed with FSRS rating (1=Again, 2=Hard, 3=Good, 4=Easy). Strengthens retention.".to_string()),
input_schema: tools::review::schema(),
},
// Stats and maintenance
ToolDescription {
name: "get_stats".to_string(),
description: Some("Get memory system statistics including total nodes, retention, and embedding status.".to_string()),
input_schema: tools::stats::stats_schema(),
},
ToolDescription {
name: "health_check".to_string(),
description: Some("Check health status of the memory system.".to_string()),
input_schema: tools::stats::health_schema(),
},
ToolDescription {
name: "run_consolidation".to_string(),
description: Some("Run memory consolidation cycle. Applies decay, promotes important memories, generates embeddings.".to_string()),
input_schema: tools::consolidate::schema(),
},
// Codebase tools
ToolDescription {
name: "remember_pattern".to_string(),
description: Some("Remember a code pattern or convention used in this codebase.".to_string()),
input_schema: tools::codebase::pattern_schema(),
},
ToolDescription {
name: "remember_decision".to_string(),
description: Some("Remember an architectural or design decision with its rationale.".to_string()),
input_schema: tools::codebase::decision_schema(),
},
ToolDescription {
name: "get_codebase_context".to_string(),
description: Some("Get remembered patterns and decisions for the current codebase.".to_string()),
input_schema: tools::codebase::context_schema(),
},
// Prospective memory (intentions)
ToolDescription {
name: "set_intention".to_string(),
description: Some("Remember to do something in the future. Supports time, context, or event triggers. Example: 'Remember to review error handling when I'm in the payments module'.".to_string()),
input_schema: tools::intentions::set_schema(),
},
ToolDescription {
name: "check_intentions".to_string(),
description: Some("Check if any intentions should be triggered based on current context. Returns triggered and pending intentions.".to_string()),
input_schema: tools::intentions::check_schema(),
},
ToolDescription {
name: "complete_intention".to_string(),
description: Some("Mark an intention as complete/fulfilled.".to_string()),
input_schema: tools::intentions::complete_schema(),
},
ToolDescription {
name: "snooze_intention".to_string(),
description: Some("Snooze an intention for a specified number of minutes.".to_string()),
input_schema: tools::intentions::snooze_schema(),
},
ToolDescription {
name: "list_intentions".to_string(),
description: Some("List all intentions, optionally filtered by status.".to_string()),
input_schema: tools::intentions::list_schema(),
},
// Neuroscience tools
ToolDescription {
name: "get_memory_state".to_string(),
description: Some("Get the cognitive state (Active/Dormant/Silent/Unavailable) of a memory based on accessibility.".to_string()),
input_schema: tools::memory_states::get_schema(),
},
ToolDescription {
name: "list_by_state".to_string(),
description: Some("List memories grouped by cognitive state.".to_string()),
input_schema: tools::memory_states::list_schema(),
},
ToolDescription {
name: "state_stats".to_string(),
description: Some("Get statistics about memory state distribution.".to_string()),
input_schema: tools::memory_states::stats_schema(),
},
ToolDescription {
name: "trigger_importance".to_string(),
description: Some("Trigger retroactive importance to strengthen recent memories. Based on Synaptic Tagging & Capture (Frey & Morris 1997).".to_string()),
input_schema: tools::tagging::trigger_schema(),
},
ToolDescription {
name: "find_tagged".to_string(),
description: Some("Find memories with high retention (tagged/strengthened memories).".to_string()),
input_schema: tools::tagging::find_schema(),
},
ToolDescription {
name: "tagging_stats".to_string(),
description: Some("Get synaptic tagging and retention statistics.".to_string()),
input_schema: tools::tagging::stats_schema(),
},
ToolDescription {
name: "match_context".to_string(),
description: Some("Search memories with context-dependent retrieval. Based on Tulving's Encoding Specificity Principle (1973).".to_string()),
input_schema: tools::context::schema(),
},
];
let result = ListToolsResult { tools };
serde_json::to_value(result).map_err(|e| JsonRpcError::internal_error(&e.to_string()))
}
/// Handle tools/call request
async fn handle_tools_call(
&self,
params: Option<serde_json::Value>,
) -> Result<serde_json::Value, JsonRpcError> {
let request: CallToolRequest = match params {
Some(p) => serde_json::from_value(p).map_err(|e| JsonRpcError::invalid_params(&e.to_string()))?,
None => return Err(JsonRpcError::invalid_params("Missing tool call parameters")),
};
let result = match request.name.as_str() {
// Core memory tools
"ingest" => tools::ingest::execute(&self.storage, request.arguments).await,
"recall" => tools::recall::execute(&self.storage, request.arguments).await,
"semantic_search" => tools::search::execute_semantic(&self.storage, request.arguments).await,
"hybrid_search" => tools::search::execute_hybrid(&self.storage, request.arguments).await,
"get_knowledge" => tools::knowledge::execute_get(&self.storage, request.arguments).await,
"delete_knowledge" => tools::knowledge::execute_delete(&self.storage, request.arguments).await,
"mark_reviewed" => tools::review::execute(&self.storage, request.arguments).await,
// Stats and maintenance
"get_stats" => tools::stats::execute_stats(&self.storage).await,
"health_check" => tools::stats::execute_health(&self.storage).await,
"run_consolidation" => tools::consolidate::execute(&self.storage).await,
// Codebase tools
"remember_pattern" => tools::codebase::execute_pattern(&self.storage, request.arguments).await,
"remember_decision" => tools::codebase::execute_decision(&self.storage, request.arguments).await,
"get_codebase_context" => tools::codebase::execute_context(&self.storage, request.arguments).await,
// Prospective memory (intentions)
"set_intention" => tools::intentions::execute_set(&self.storage, request.arguments).await,
"check_intentions" => tools::intentions::execute_check(&self.storage, request.arguments).await,
"complete_intention" => tools::intentions::execute_complete(&self.storage, request.arguments).await,
"snooze_intention" => tools::intentions::execute_snooze(&self.storage, request.arguments).await,
"list_intentions" => tools::intentions::execute_list(&self.storage, request.arguments).await,
// Neuroscience tools
"get_memory_state" => tools::memory_states::execute_get(&self.storage, request.arguments).await,
"list_by_state" => tools::memory_states::execute_list(&self.storage, request.arguments).await,
"state_stats" => tools::memory_states::execute_stats(&self.storage).await,
"trigger_importance" => tools::tagging::execute_trigger(&self.storage, request.arguments).await,
"find_tagged" => tools::tagging::execute_find(&self.storage, request.arguments).await,
"tagging_stats" => tools::tagging::execute_stats(&self.storage).await,
"match_context" => tools::context::execute(&self.storage, request.arguments).await,
name => {
return Err(JsonRpcError::method_not_found_with_message(&format!(
"Unknown tool: {}",
name
)));
}
};
match result {
Ok(content) => {
let call_result = CallToolResult {
content: vec![crate::protocol::messages::ToolResultContent {
content_type: "text".to_string(),
text: serde_json::to_string_pretty(&content).unwrap_or_else(|_| content.to_string()),
}],
is_error: Some(false),
};
serde_json::to_value(call_result).map_err(|e| JsonRpcError::internal_error(&e.to_string()))
}
Err(e) => {
let call_result = CallToolResult {
content: vec![crate::protocol::messages::ToolResultContent {
content_type: "text".to_string(),
text: serde_json::json!({ "error": e }).to_string(),
}],
is_error: Some(true),
};
serde_json::to_value(call_result).map_err(|e| JsonRpcError::internal_error(&e.to_string()))
}
}
}
/// Handle resources/list request
async fn handle_resources_list(&self) -> Result<serde_json::Value, JsonRpcError> {
let resources = vec![
// Memory resources
ResourceDescription {
uri: "memory://stats".to_string(),
name: "Memory Statistics".to_string(),
description: Some("Current memory system statistics and health status".to_string()),
mime_type: Some("application/json".to_string()),
},
ResourceDescription {
uri: "memory://recent".to_string(),
name: "Recent Memories".to_string(),
description: Some("Recently added memories (last 10)".to_string()),
mime_type: Some("application/json".to_string()),
},
ResourceDescription {
uri: "memory://decaying".to_string(),
name: "Decaying Memories".to_string(),
description: Some("Memories with low retention that need review".to_string()),
mime_type: Some("application/json".to_string()),
},
ResourceDescription {
uri: "memory://due".to_string(),
name: "Due for Review".to_string(),
description: Some("Memories scheduled for review today".to_string()),
mime_type: Some("application/json".to_string()),
},
// Codebase resources
ResourceDescription {
uri: "codebase://structure".to_string(),
name: "Codebase Structure".to_string(),
description: Some("Remembered project structure and organization".to_string()),
mime_type: Some("application/json".to_string()),
},
ResourceDescription {
uri: "codebase://patterns".to_string(),
name: "Code Patterns".to_string(),
description: Some("Remembered code patterns and conventions".to_string()),
mime_type: Some("application/json".to_string()),
},
ResourceDescription {
uri: "codebase://decisions".to_string(),
name: "Architectural Decisions".to_string(),
description: Some("Remembered architectural and design decisions".to_string()),
mime_type: Some("application/json".to_string()),
},
// Prospective memory resources
ResourceDescription {
uri: "memory://intentions".to_string(),
name: "Active Intentions".to_string(),
description: Some("Future intentions (prospective memory) waiting to be triggered".to_string()),
mime_type: Some("application/json".to_string()),
},
ResourceDescription {
uri: "memory://intentions/due".to_string(),
name: "Triggered Intentions".to_string(),
description: Some("Intentions that have been triggered or are overdue".to_string()),
mime_type: Some("application/json".to_string()),
},
];
let result = ListResourcesResult { resources };
serde_json::to_value(result).map_err(|e| JsonRpcError::internal_error(&e.to_string()))
}
/// Handle resources/read request
async fn handle_resources_read(
&self,
params: Option<serde_json::Value>,
) -> Result<serde_json::Value, JsonRpcError> {
let request: ReadResourceRequest = match params {
Some(p) => serde_json::from_value(p).map_err(|e| JsonRpcError::invalid_params(&e.to_string()))?,
None => return Err(JsonRpcError::invalid_params("Missing resource URI")),
};
let uri = &request.uri;
let content = if uri.starts_with("memory://") {
resources::memory::read(&self.storage, uri).await
} else if uri.starts_with("codebase://") {
resources::codebase::read(&self.storage, uri).await
} else {
Err(format!("Unknown resource scheme: {}", uri))
};
match content {
Ok(text) => {
let result = ReadResourceResult {
contents: vec![crate::protocol::messages::ResourceContent {
uri: uri.clone(),
mime_type: Some("application/json".to_string()),
text: Some(text),
blob: None,
}],
};
serde_json::to_value(result).map_err(|e| JsonRpcError::internal_error(&e.to_string()))
}
Err(e) => Err(JsonRpcError::internal_error(&e)),
}
}
}
// ============================================================================
// TESTS
// ============================================================================
#[cfg(test)]
mod tests {
use super::*;
use tempfile::TempDir;
/// Create a test storage instance with a temporary database
async fn test_storage() -> (Arc<Mutex<Storage>>, TempDir) {
let dir = TempDir::new().unwrap();
let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
(Arc::new(Mutex::new(storage)), dir)
}
/// Create a test server with temporary storage
async fn test_server() -> (McpServer, TempDir) {
let (storage, dir) = test_storage().await;
let server = McpServer::new(storage);
(server, dir)
}
/// Create a JSON-RPC request
fn make_request(method: &str, params: Option<serde_json::Value>) -> JsonRpcRequest {
JsonRpcRequest {
jsonrpc: "2.0".to_string(),
id: Some(serde_json::json!(1)),
method: method.to_string(),
params,
}
}
// ========================================================================
// INITIALIZATION TESTS
// ========================================================================
#[tokio::test]
async fn test_initialize_sets_initialized_flag() {
let (mut server, _dir) = test_server().await;
assert!(!server.initialized);
let request = make_request("initialize", Some(serde_json::json!({
"protocolVersion": "2024-11-05",
"capabilities": {},
"clientInfo": {
"name": "test-client",
"version": "1.0.0"
}
})));
let response = server.handle_request(request).await;
assert!(response.is_some());
let response = response.unwrap();
assert!(response.result.is_some());
assert!(response.error.is_none());
assert!(server.initialized);
}
#[tokio::test]
async fn test_initialize_returns_server_info() {
let (mut server, _dir) = test_server().await;
let request = make_request("initialize", None);
let response = server.handle_request(request).await.unwrap();
let result = response.result.unwrap();
assert_eq!(result["protocolVersion"], MCP_VERSION);
assert_eq!(result["serverInfo"]["name"], "vestige");
assert!(result["capabilities"]["tools"].is_object());
assert!(result["capabilities"]["resources"].is_object());
assert!(result["instructions"].is_string());
}
#[tokio::test]
async fn test_initialize_with_default_params() {
let (mut server, _dir) = test_server().await;
let request = make_request("initialize", None);
let response = server.handle_request(request).await.unwrap();
assert!(response.result.is_some());
assert!(response.error.is_none());
}
// ========================================================================
// UNINITIALIZED SERVER TESTS
// ========================================================================
#[tokio::test]
async fn test_request_before_initialize_returns_error() {
let (mut server, _dir) = test_server().await;
let request = make_request("tools/list", None);
let response = server.handle_request(request).await.unwrap();
assert!(response.result.is_none());
assert!(response.error.is_some());
let error = response.error.unwrap();
assert_eq!(error.code, -32003); // ServerNotInitialized
}
#[tokio::test]
async fn test_ping_before_initialize_returns_error() {
let (mut server, _dir) = test_server().await;
let request = make_request("ping", None);
let response = server.handle_request(request).await.unwrap();
assert!(response.error.is_some());
assert_eq!(response.error.unwrap().code, -32003);
}
// ========================================================================
// NOTIFICATION TESTS
// ========================================================================
#[tokio::test]
async fn test_initialized_notification_returns_none() {
let (mut server, _dir) = test_server().await;
// First initialize
let init_request = make_request("initialize", None);
server.handle_request(init_request).await;
// Send initialized notification
let notification = make_request("notifications/initialized", None);
let response = server.handle_request(notification).await;
// Notifications should return None
assert!(response.is_none());
}
// ========================================================================
// TOOLS/LIST TESTS
// ========================================================================
#[tokio::test]
async fn test_tools_list_returns_all_tools() {
let (mut server, _dir) = test_server().await;
// Initialize first
let init_request = make_request("initialize", None);
server.handle_request(init_request).await;
let request = make_request("tools/list", None);
let response = server.handle_request(request).await.unwrap();
let result = response.result.unwrap();
let tools = result["tools"].as_array().unwrap();
// Verify expected tools are present
let tool_names: Vec<&str> = tools
.iter()
.map(|t| t["name"].as_str().unwrap())
.collect();
assert!(tool_names.contains(&"ingest"));
assert!(tool_names.contains(&"recall"));
assert!(tool_names.contains(&"semantic_search"));
assert!(tool_names.contains(&"hybrid_search"));
assert!(tool_names.contains(&"get_knowledge"));
assert!(tool_names.contains(&"delete_knowledge"));
assert!(tool_names.contains(&"mark_reviewed"));
assert!(tool_names.contains(&"get_stats"));
assert!(tool_names.contains(&"health_check"));
assert!(tool_names.contains(&"run_consolidation"));
assert!(tool_names.contains(&"set_intention"));
assert!(tool_names.contains(&"check_intentions"));
assert!(tool_names.contains(&"complete_intention"));
assert!(tool_names.contains(&"snooze_intention"));
assert!(tool_names.contains(&"list_intentions"));
}
#[tokio::test]
async fn test_tools_have_descriptions_and_schemas() {
let (mut server, _dir) = test_server().await;
let init_request = make_request("initialize", None);
server.handle_request(init_request).await;
let request = make_request("tools/list", None);
let response = server.handle_request(request).await.unwrap();
let result = response.result.unwrap();
let tools = result["tools"].as_array().unwrap();
for tool in tools {
assert!(tool["name"].is_string(), "Tool should have a name");
assert!(tool["description"].is_string(), "Tool should have a description");
assert!(tool["inputSchema"].is_object(), "Tool should have an input schema");
}
}
// ========================================================================
// RESOURCES/LIST TESTS
// ========================================================================
#[tokio::test]
async fn test_resources_list_returns_all_resources() {
let (mut server, _dir) = test_server().await;
let init_request = make_request("initialize", None);
server.handle_request(init_request).await;
let request = make_request("resources/list", None);
let response = server.handle_request(request).await.unwrap();
let result = response.result.unwrap();
let resources = result["resources"].as_array().unwrap();
// Verify expected resources are present
let resource_uris: Vec<&str> = resources
.iter()
.map(|r| r["uri"].as_str().unwrap())
.collect();
assert!(resource_uris.contains(&"memory://stats"));
assert!(resource_uris.contains(&"memory://recent"));
assert!(resource_uris.contains(&"memory://decaying"));
assert!(resource_uris.contains(&"memory://due"));
assert!(resource_uris.contains(&"memory://intentions"));
assert!(resource_uris.contains(&"codebase://structure"));
assert!(resource_uris.contains(&"codebase://patterns"));
assert!(resource_uris.contains(&"codebase://decisions"));
}
#[tokio::test]
async fn test_resources_have_descriptions() {
let (mut server, _dir) = test_server().await;
let init_request = make_request("initialize", None);
server.handle_request(init_request).await;
let request = make_request("resources/list", None);
let response = server.handle_request(request).await.unwrap();
let result = response.result.unwrap();
let resources = result["resources"].as_array().unwrap();
for resource in resources {
assert!(resource["uri"].is_string(), "Resource should have a URI");
assert!(resource["name"].is_string(), "Resource should have a name");
assert!(resource["description"].is_string(), "Resource should have a description");
}
}
// ========================================================================
// UNKNOWN METHOD TESTS
// ========================================================================
#[tokio::test]
async fn test_unknown_method_returns_error() {
let (mut server, _dir) = test_server().await;
// Initialize first
let init_request = make_request("initialize", None);
server.handle_request(init_request).await;
let request = make_request("unknown/method", None);
let response = server.handle_request(request).await.unwrap();
assert!(response.result.is_none());
assert!(response.error.is_some());
let error = response.error.unwrap();
assert_eq!(error.code, -32601); // MethodNotFound
}
#[tokio::test]
async fn test_unknown_tool_returns_error() {
let (mut server, _dir) = test_server().await;
let init_request = make_request("initialize", None);
server.handle_request(init_request).await;
let request = make_request("tools/call", Some(serde_json::json!({
"name": "nonexistent_tool",
"arguments": {}
})));
let response = server.handle_request(request).await.unwrap();
assert!(response.error.is_some());
assert_eq!(response.error.unwrap().code, -32601);
}
// ========================================================================
// PING TESTS
// ========================================================================
#[tokio::test]
async fn test_ping_returns_empty_object() {
let (mut server, _dir) = test_server().await;
let init_request = make_request("initialize", None);
server.handle_request(init_request).await;
let request = make_request("ping", None);
let response = server.handle_request(request).await.unwrap();
assert!(response.result.is_some());
assert!(response.error.is_none());
assert_eq!(response.result.unwrap(), serde_json::json!({}));
}
// ========================================================================
// TOOLS/CALL TESTS
// ========================================================================
#[tokio::test]
async fn test_tools_call_missing_params_returns_error() {
let (mut server, _dir) = test_server().await;
let init_request = make_request("initialize", None);
server.handle_request(init_request).await;
let request = make_request("tools/call", None);
let response = server.handle_request(request).await.unwrap();
assert!(response.error.is_some());
assert_eq!(response.error.unwrap().code, -32602); // InvalidParams
}
#[tokio::test]
async fn test_tools_call_invalid_params_returns_error() {
let (mut server, _dir) = test_server().await;
let init_request = make_request("initialize", None);
server.handle_request(init_request).await;
let request = make_request("tools/call", Some(serde_json::json!({
"invalid": "params"
})));
let response = server.handle_request(request).await.unwrap();
assert!(response.error.is_some());
assert_eq!(response.error.unwrap().code, -32602);
}
}


@@ -0,0 +1,304 @@
//! Codebase Tools
//!
//! Remember patterns, decisions, and context about codebases.
//! This is a differentiating feature for AI-assisted development.
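//!
//! A sketch of the arguments for a `remember_decision` call (values are
//! illustrative, not from a real project):
//!
//! ```json
//! {
//!   "decision": "Use SQLite for local persistence",
//!   "rationale": "Single-file, zero-config storage suits a local MCP server",
//!   "alternatives": ["sled", "flat JSON files"],
//!   "codebase": "vestige"
//! }
//! ```
//!
//! Decisions and patterns are ingested as tagged memory nodes, so they decay
//! and can be recalled like any other knowledge.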
use serde::Deserialize;
use serde_json::Value;
use std::sync::Arc;
use tokio::sync::Mutex;
use vestige_core::{IngestInput, Storage};
/// Input schema for remember_pattern tool
pub fn pattern_schema() -> Value {
serde_json::json!({
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "Name/title for this pattern"
},
"description": {
"type": "string",
"description": "Detailed description of the pattern"
},
"files": {
"type": "array",
"items": { "type": "string" },
"description": "Files where this pattern is used"
},
"codebase": {
"type": "string",
"description": "Codebase/project identifier (e.g., 'vestige-tauri')"
}
},
"required": ["name", "description"]
})
}
/// Input schema for remember_decision tool
pub fn decision_schema() -> Value {
serde_json::json!({
"type": "object",
"properties": {
"decision": {
"type": "string",
"description": "The architectural or design decision made"
},
"rationale": {
"type": "string",
"description": "Why this decision was made"
},
"alternatives": {
"type": "array",
"items": { "type": "string" },
"description": "Alternatives that were considered"
},
"files": {
"type": "array",
"items": { "type": "string" },
"description": "Files affected by this decision"
},
"codebase": {
"type": "string",
"description": "Codebase/project identifier"
}
},
"required": ["decision", "rationale"]
})
}
/// Input schema for get_codebase_context tool
pub fn context_schema() -> Value {
serde_json::json!({
"type": "object",
"properties": {
"codebase": {
"type": "string",
"description": "Codebase/project identifier to get context for"
},
"limit": {
"type": "integer",
"description": "Maximum items per category (default: 10)",
"default": 10
}
},
"required": []
})
}
#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
struct PatternArgs {
name: String,
description: String,
files: Option<Vec<String>>,
codebase: Option<String>,
}
#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
struct DecisionArgs {
decision: String,
rationale: String,
alternatives: Option<Vec<String>>,
files: Option<Vec<String>>,
codebase: Option<String>,
}
#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
struct ContextArgs {
codebase: Option<String>,
limit: Option<i32>,
}
pub async fn execute_pattern(
storage: &Arc<Mutex<Storage>>,
args: Option<Value>,
) -> Result<Value, String> {
let args: PatternArgs = match args {
Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
None => return Err("Missing arguments".to_string()),
};
if args.name.trim().is_empty() {
return Err("Pattern name cannot be empty".to_string());
}
// Build content with structured format
let mut content = format!("# Code Pattern: {}\n\n{}", args.name, args.description);
if let Some(ref files) = args.files {
if !files.is_empty() {
content.push_str("\n\n## Files:\n");
for f in files {
content.push_str(&format!("- {}\n", f));
}
}
}
// Build tags
let mut tags = vec!["pattern".to_string(), "codebase".to_string()];
if let Some(ref codebase) = args.codebase {
tags.push(format!("codebase:{}", codebase));
}
let input = IngestInput {
content,
node_type: "pattern".to_string(),
source: args.codebase.clone(),
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags,
valid_from: None,
valid_until: None,
};
let mut storage = storage.lock().await;
let node = storage.ingest(input).map_err(|e| e.to_string())?;
Ok(serde_json::json!({
"success": true,
"nodeId": node.id,
"patternName": args.name,
"message": format!("Pattern '{}' remembered successfully", args.name),
}))
}
pub async fn execute_decision(
storage: &Arc<Mutex<Storage>>,
args: Option<Value>,
) -> Result<Value, String> {
let args: DecisionArgs = match args {
Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
None => return Err("Missing arguments".to_string()),
};
if args.decision.trim().is_empty() {
return Err("Decision cannot be empty".to_string());
}
// Build content with structured format (ADR-like)
let mut content = format!(
"# Decision: {}\n\n## Context\n\n{}\n\n## Decision\n\n{}",
&args.decision[..args.decision.len().min(50)],
args.rationale,
args.decision
);
if let Some(ref alternatives) = args.alternatives {
if !alternatives.is_empty() {
content.push_str("\n\n## Alternatives Considered:\n");
for alt in alternatives {
content.push_str(&format!("- {}\n", alt));
}
}
}
if let Some(ref files) = args.files {
if !files.is_empty() {
content.push_str("\n\n## Affected Files:\n");
for f in files {
content.push_str(&format!("- {}\n", f));
}
}
}
// Build tags
let mut tags = vec!["decision".to_string(), "architecture".to_string(), "codebase".to_string()];
if let Some(ref codebase) = args.codebase {
tags.push(format!("codebase:{}", codebase));
}
let input = IngestInput {
content,
node_type: "decision".to_string(),
source: args.codebase.clone(),
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags,
valid_from: None,
valid_until: None,
};
let mut storage = storage.lock().await;
let node = storage.ingest(input).map_err(|e| e.to_string())?;
Ok(serde_json::json!({
"success": true,
"nodeId": node.id,
"message": "Architectural decision remembered successfully",
}))
}
pub async fn execute_context(
storage: &Arc<Mutex<Storage>>,
args: Option<Value>,
) -> Result<Value, String> {
let args: ContextArgs = args
.map(|v| serde_json::from_value(v))
.transpose()
.map_err(|e| format!("Invalid arguments: {}", e))?
.unwrap_or(ContextArgs {
codebase: None,
limit: Some(10),
});
let limit = args.limit.unwrap_or(10).clamp(1, 50);
let storage = storage.lock().await;
// Build tag filter for codebase
// Tags are stored as: ["pattern", "codebase", "codebase:vestige"]
// We search for the "codebase:{name}" tag
let tag_filter = args.codebase.as_ref().map(|cb| format!("codebase:{}", cb));
// Query patterns by node_type and tag
let patterns = storage
.get_nodes_by_type_and_tag("pattern", tag_filter.as_deref(), limit)
.unwrap_or_default();
// Query decisions by node_type and tag
let decisions = storage
.get_nodes_by_type_and_tag("decision", tag_filter.as_deref(), limit)
.unwrap_or_default();
let formatted_patterns: Vec<Value> = patterns
.iter()
.map(|n| {
serde_json::json!({
"id": n.id,
"content": n.content,
"tags": n.tags,
"retentionStrength": n.retention_strength,
"createdAt": n.created_at.to_rfc3339(),
})
})
.collect();
let formatted_decisions: Vec<Value> = decisions
.iter()
.map(|n| {
serde_json::json!({
"id": n.id,
"content": n.content,
"tags": n.tags,
"retentionStrength": n.retention_strength,
"createdAt": n.created_at.to_rfc3339(),
})
})
.collect();
Ok(serde_json::json!({
"codebase": args.codebase,
"patterns": {
"count": formatted_patterns.len(),
"items": formatted_patterns,
},
"decisions": {
"count": formatted_decisions.len(),
"items": formatted_decisions,
},
}))
}
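
The tag convention above ("codebase:{name}" alongside plain category tags) is what `get_codebase_context` filters on. A standalone sketch of that convention, with hypothetical helper names — the real matching happens inside vestige-core's `get_nodes_by_type_and_tag`:

```rust
// Build the scoping tag exactly as execute_pattern/execute_decision do.
fn codebase_tag(codebase: &str) -> String {
    format!("codebase:{}", codebase)
}

// A node matches when no filter is given, or when it carries the exact tag.
fn matches_codebase(tags: &[String], filter: Option<&str>) -> bool {
    match filter {
        None => true,
        Some(f) => tags.iter().any(|t| t.as_str() == f),
    }
}

fn main() {
    let tags = vec![
        "pattern".to_string(),
        "codebase".to_string(),
        codebase_tag("vestige-tauri"),
    ];
    assert!(matches_codebase(&tags, Some("codebase:vestige-tauri")));
    assert!(!matches_codebase(&tags, Some("codebase:other")));
    println!("{:?}", tags);
}
```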


@ -0,0 +1,38 @@
//! Consolidation Tool
//!
//! Run memory consolidation cycle with FSRS decay and embedding generation.
use serde_json::Value;
use std::sync::Arc;
use tokio::sync::Mutex;
use vestige_core::Storage;
/// Input schema for run_consolidation tool
pub fn schema() -> Value {
serde_json::json!({
"type": "object",
"properties": {},
})
}
pub async fn execute(storage: &Arc<Mutex<Storage>>) -> Result<Value, String> {
let mut storage = storage.lock().await;
let result = storage.run_consolidation().map_err(|e| e.to_string())?;
Ok(serde_json::json!({
"success": true,
"nodesProcessed": result.nodes_processed,
"nodesPromoted": result.nodes_promoted,
"nodesPruned": result.nodes_pruned,
"decayApplied": result.decay_applied,
"embeddingsGenerated": result.embeddings_generated,
"durationMs": result.duration_ms,
"message": format!(
"Consolidation complete: {} nodes processed, {} embeddings generated, {}ms",
result.nodes_processed,
result.embeddings_generated,
result.duration_ms
),
}))
}


@ -0,0 +1,173 @@
//! Context-Dependent Memory Tool
//!
//! Retrieval based on encoding context match.
//! Based on Tulving & Thomson's Encoding Specificity Principle (1973).
use chrono::Utc;
use serde_json::Value;
use std::sync::Arc;
use tokio::sync::Mutex;
use vestige_core::{RecallInput, SearchMode, Storage};
/// Input schema for match_context tool
pub fn schema() -> Value {
serde_json::json!({
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "Search query for content matching"
},
"topics": {
"type": "array",
"items": { "type": "string" },
"description": "Active topics in current context"
},
"project": {
"type": "string",
"description": "Current project name"
},
"mood": {
"type": "string",
"enum": ["positive", "negative", "neutral"],
"description": "Current emotional state"
},
"time_weight": {
"type": "number",
"description": "Weight for temporal context (0.0-1.0, default: 0.3)"
},
"topic_weight": {
"type": "number",
"description": "Weight for topical context (0.0-1.0, default: 0.4)"
},
"limit": {
"type": "integer",
"description": "Maximum results (default: 10)"
}
},
"required": ["query"]
})
}
pub async fn execute(
storage: &Arc<Mutex<Storage>>,
args: Option<Value>,
) -> Result<Value, String> {
let args = args.ok_or("Missing arguments")?;
let query = args["query"]
.as_str()
.ok_or("query is required")?;
let topics: Vec<String> = args["topics"]
.as_array()
.map(|arr| arr.iter().filter_map(|v| v.as_str().map(String::from)).collect())
.unwrap_or_default();
let project = args["project"].as_str().map(String::from);
let mood = args["mood"].as_str().unwrap_or("neutral");
let time_weight = args["time_weight"].as_f64().unwrap_or(0.3);
let topic_weight = args["topic_weight"].as_f64().unwrap_or(0.4);
    let limit = args["limit"].as_i64().unwrap_or(10).clamp(1, 100) as i32;
let storage = storage.lock().await;
let now = Utc::now();
// Get candidate memories
let recall_input = RecallInput {
query: query.to_string(),
limit: limit * 2, // Get more, then filter
min_retention: 0.0,
search_mode: SearchMode::Hybrid,
valid_at: None,
};
let candidates = storage.recall(recall_input)
.map_err(|e| e.to_string())?;
// Score by context match (simplified implementation)
let mut scored_results: Vec<_> = candidates.into_iter()
.map(|mem| {
// Calculate context score based on:
// 1. Temporal proximity (how recent)
let hours_ago = (now - mem.created_at).num_hours() as f64;
let temporal_score = 1.0 / (1.0 + hours_ago / 24.0); // Decay over days
// 2. Tag overlap with topics
let tag_overlap = if topics.is_empty() {
0.5 // Neutral if no topics specified
} else {
let matching = mem.tags.iter()
.filter(|t| topics.iter().any(|topic| topic.to_lowercase().contains(&t.to_lowercase())))
.count();
matching as f64 / topics.len().max(1) as f64
};
// 3. Project match
let project_score = match (&project, &mem.source) {
(Some(p), Some(s)) if s.to_lowercase().contains(&p.to_lowercase()) => 1.0,
(Some(_), None) => 0.0,
(None, _) => 0.5,
_ => 0.3,
};
// 4. Emotional match (simplified)
let mood_score = match mood {
"positive" if mem.sentiment_score > 0.0 => 0.8,
"negative" if mem.sentiment_score < 0.0 => 0.8,
"neutral" if mem.sentiment_score.abs() < 0.3 => 0.8,
_ => 0.5,
};
// Combine scores
let context_score = temporal_score * time_weight
+ tag_overlap * topic_weight
+ project_score * 0.2
+ mood_score * 0.1;
let combined_score = mem.retention_strength * 0.5 + context_score * 0.5;
(mem, context_score, combined_score)
})
.collect();
// Sort by combined score (handle NaN safely)
scored_results.sort_by(|a, b| b.2.partial_cmp(&a.2).unwrap_or(std::cmp::Ordering::Equal));
scored_results.truncate(limit as usize);
let results: Vec<Value> = scored_results.into_iter()
.map(|(mem, ctx_score, combined)| {
serde_json::json!({
"id": mem.id,
"content": mem.content,
"retentionStrength": mem.retention_strength,
"contextScore": ctx_score,
"combinedScore": combined,
"tags": mem.tags,
"createdAt": mem.created_at.to_rfc3339()
})
})
.collect();
Ok(serde_json::json!({
"success": true,
"query": query,
"currentContext": {
"topics": topics,
"project": project,
"mood": mood
},
"weights": {
"temporal": time_weight,
"topical": topic_weight
},
"resultCount": results.len(),
"results": results,
"science": {
"theory": "Encoding Specificity Principle (Tulving & Thomson, 1973)",
"principle": "Memory retrieval is most effective when retrieval context matches encoding context"
}
}))
}
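
The scoring inside `execute` combines four context signals with the default weights (time 0.3, topic 0.4, plus fixed 0.2 for project and 0.1 for mood). A minimal standalone sketch of that arithmetic — function name here is hypothetical, values are illustrative:

```rust
// Mirror of the context-match formula used by match_context.
fn context_score(
    hours_ago: f64,
    tag_overlap: f64,
    project: f64,
    mood: f64,
    time_w: f64,
    topic_w: f64,
) -> f64 {
    // Temporal proximity decays over days: 1 / (1 + hours/24).
    let temporal = 1.0 / (1.0 + hours_ago / 24.0);
    temporal * time_w + tag_overlap * topic_w + project * 0.2 + mood * 0.1
}

fn main() {
    // A memory encoded 24h ago (temporal = 0.5), 75% topic overlap,
    // a project match (1.0), and a mood match (0.8):
    // 0.5*0.3 + 0.75*0.4 + 1.0*0.2 + 0.8*0.1 = 0.73
    let score = context_score(24.0, 0.75, 1.0, 0.8, 0.3, 0.4);
    assert!((score - 0.73).abs() < 1e-9);
    println!("{:.2}", score);
}
```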


@ -0,0 +1,286 @@
//! Ingest Tool
//!
//! Add new knowledge to memory.
use serde::Deserialize;
use serde_json::Value;
use std::sync::Arc;
use tokio::sync::Mutex;
use vestige_core::{IngestInput, Storage};
/// Input schema for ingest tool
pub fn schema() -> Value {
serde_json::json!({
"type": "object",
"properties": {
"content": {
"type": "string",
"description": "The content to remember"
},
"node_type": {
"type": "string",
"description": "Type of knowledge: fact, concept, event, person, place, note, pattern, decision",
"default": "fact"
},
"tags": {
"type": "array",
"items": { "type": "string" },
"description": "Tags for categorization"
},
"source": {
"type": "string",
"description": "Source or reference for this knowledge"
}
},
"required": ["content"]
})
}
#[derive(Debug, Deserialize)]
// No rename_all here: the input schema advertises snake_case keys
// ("node_type"), and camelCase renaming would silently drop that argument.
struct IngestArgs {
content: String,
node_type: Option<String>,
tags: Option<Vec<String>>,
source: Option<String>,
}
pub async fn execute(
storage: &Arc<Mutex<Storage>>,
args: Option<Value>,
) -> Result<Value, String> {
let args: IngestArgs = match args {
Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
None => return Err("Missing arguments".to_string()),
};
// Validate content
if args.content.trim().is_empty() {
return Err("Content cannot be empty".to_string());
}
if args.content.len() > 1_000_000 {
return Err("Content too large (max 1MB)".to_string());
}
let input = IngestInput {
content: args.content,
node_type: args.node_type.unwrap_or_else(|| "fact".to_string()),
source: args.source,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: args.tags.unwrap_or_default(),
valid_from: None,
valid_until: None,
};
let mut storage = storage.lock().await;
let node = storage.ingest(input).map_err(|e| e.to_string())?;
Ok(serde_json::json!({
"success": true,
"nodeId": node.id,
"message": format!("Knowledge ingested successfully. Node ID: {}", node.id),
"hasEmbedding": node.has_embedding.unwrap_or(false),
}))
}
// ============================================================================
// TESTS
// ============================================================================
#[cfg(test)]
mod tests {
use super::*;
use tempfile::TempDir;
/// Create a test storage instance with a temporary database
async fn test_storage() -> (Arc<Mutex<Storage>>, TempDir) {
let dir = TempDir::new().unwrap();
let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
(Arc::new(Mutex::new(storage)), dir)
}
// ========================================================================
// INPUT VALIDATION TESTS
// ========================================================================
#[tokio::test]
async fn test_ingest_empty_content_fails() {
let (storage, _dir) = test_storage().await;
let args = serde_json::json!({ "content": "" });
let result = execute(&storage, Some(args)).await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("empty"));
}
#[tokio::test]
async fn test_ingest_whitespace_only_content_fails() {
let (storage, _dir) = test_storage().await;
let args = serde_json::json!({ "content": " \n\t " });
let result = execute(&storage, Some(args)).await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("empty"));
}
#[tokio::test]
async fn test_ingest_missing_arguments_fails() {
let (storage, _dir) = test_storage().await;
let result = execute(&storage, None).await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("Missing arguments"));
}
#[tokio::test]
async fn test_ingest_missing_content_field_fails() {
let (storage, _dir) = test_storage().await;
let args = serde_json::json!({ "node_type": "fact" });
let result = execute(&storage, Some(args)).await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("Invalid arguments"));
}
// ========================================================================
// LARGE CONTENT TESTS
// ========================================================================
#[tokio::test]
async fn test_ingest_large_content_fails() {
let (storage, _dir) = test_storage().await;
// Create content larger than 1MB
let large_content = "x".repeat(1_000_001);
let args = serde_json::json!({ "content": large_content });
let result = execute(&storage, Some(args)).await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("too large"));
}
#[tokio::test]
async fn test_ingest_exactly_1mb_succeeds() {
let (storage, _dir) = test_storage().await;
// Create content exactly 1MB
let exact_content = "x".repeat(1_000_000);
let args = serde_json::json!({ "content": exact_content });
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
}
// ========================================================================
// SUCCESSFUL INGEST TESTS
// ========================================================================
#[tokio::test]
async fn test_ingest_basic_content_succeeds() {
let (storage, _dir) = test_storage().await;
let args = serde_json::json!({
"content": "This is a test fact to remember."
});
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
assert_eq!(value["success"], true);
assert!(value["nodeId"].is_string());
assert!(value["message"].as_str().unwrap().contains("successfully"));
}
#[tokio::test]
async fn test_ingest_with_node_type() {
let (storage, _dir) = test_storage().await;
let args = serde_json::json!({
"content": "Error handling should use Result<T, E> pattern.",
"node_type": "pattern"
});
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
assert_eq!(value["success"], true);
}
#[tokio::test]
async fn test_ingest_with_tags() {
let (storage, _dir) = test_storage().await;
let args = serde_json::json!({
"content": "The Rust programming language emphasizes safety.",
"tags": ["rust", "programming", "safety"]
});
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
assert_eq!(value["success"], true);
}
#[tokio::test]
async fn test_ingest_with_source() {
let (storage, _dir) = test_storage().await;
let args = serde_json::json!({
"content": "MCP protocol version 2024-11-05 is the current standard.",
"source": "https://modelcontextprotocol.io/spec"
});
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
assert_eq!(value["success"], true);
}
#[tokio::test]
async fn test_ingest_with_all_optional_fields() {
let (storage, _dir) = test_storage().await;
let args = serde_json::json!({
"content": "Complex memory with all metadata.",
"node_type": "decision",
"tags": ["architecture", "design"],
"source": "team meeting notes"
});
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
assert_eq!(value["success"], true);
assert!(value["nodeId"].is_string());
}
// ========================================================================
// NODE TYPE DEFAULTS
// ========================================================================
#[tokio::test]
async fn test_ingest_default_node_type_is_fact() {
let (storage, _dir) = test_storage().await;
let args = serde_json::json!({
"content": "Default type test content."
});
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
// Verify node was created - the default type is "fact"
let node_id = result.unwrap()["nodeId"].as_str().unwrap().to_string();
let storage_lock = storage.lock().await;
let node = storage_lock.get_node(&node_id).unwrap().unwrap();
assert_eq!(node.node_type, "fact");
}
// ========================================================================
// SCHEMA TESTS
// ========================================================================
#[test]
fn test_schema_has_required_fields() {
let schema_value = schema();
assert_eq!(schema_value["type"], "object");
assert!(schema_value["properties"]["content"].is_object());
assert!(schema_value["required"].as_array().unwrap().contains(&serde_json::json!("content")));
}
#[test]
fn test_schema_has_optional_fields() {
let schema_value = schema();
assert!(schema_value["properties"]["node_type"].is_object());
assert!(schema_value["properties"]["tags"].is_object());
assert!(schema_value["properties"]["source"].is_object());
}
}

File diff suppressed because it is too large.


@ -0,0 +1,115 @@
//! Knowledge Tools
//!
//! Get and delete specific knowledge nodes.
use serde::Deserialize;
use serde_json::Value;
use std::sync::Arc;
use tokio::sync::Mutex;
use vestige_core::Storage;
/// Input schema for get_knowledge tool
pub fn get_schema() -> Value {
serde_json::json!({
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "The ID of the knowledge node to retrieve"
}
},
"required": ["id"]
})
}
/// Input schema for delete_knowledge tool
pub fn delete_schema() -> Value {
serde_json::json!({
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "The ID of the knowledge node to delete"
}
},
"required": ["id"]
})
}
#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
struct KnowledgeArgs {
id: String,
}
pub async fn execute_get(
storage: &Arc<Mutex<Storage>>,
args: Option<Value>,
) -> Result<Value, String> {
let args: KnowledgeArgs = match args {
Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
None => return Err("Missing arguments".to_string()),
};
// Validate UUID
uuid::Uuid::parse_str(&args.id).map_err(|_| "Invalid node ID format".to_string())?;
let storage = storage.lock().await;
let node = storage.get_node(&args.id).map_err(|e| e.to_string())?;
match node {
Some(n) => Ok(serde_json::json!({
"found": true,
"node": {
"id": n.id,
"content": n.content,
"nodeType": n.node_type,
"createdAt": n.created_at.to_rfc3339(),
"updatedAt": n.updated_at.to_rfc3339(),
"lastAccessed": n.last_accessed.to_rfc3339(),
"stability": n.stability,
"difficulty": n.difficulty,
"reps": n.reps,
"lapses": n.lapses,
"storageStrength": n.storage_strength,
"retrievalStrength": n.retrieval_strength,
"retentionStrength": n.retention_strength,
"sentimentScore": n.sentiment_score,
"sentimentMagnitude": n.sentiment_magnitude,
"nextReview": n.next_review.map(|d| d.to_rfc3339()),
"source": n.source,
"tags": n.tags,
"hasEmbedding": n.has_embedding,
"embeddingModel": n.embedding_model,
}
})),
None => Ok(serde_json::json!({
"found": false,
"nodeId": args.id,
"message": "Node not found",
})),
}
}
pub async fn execute_delete(
storage: &Arc<Mutex<Storage>>,
args: Option<Value>,
) -> Result<Value, String> {
let args: KnowledgeArgs = match args {
Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
None => return Err("Missing arguments".to_string()),
};
// Validate UUID
uuid::Uuid::parse_str(&args.id).map_err(|_| "Invalid node ID format".to_string())?;
let mut storage = storage.lock().await;
let deleted = storage.delete_node(&args.id).map_err(|e| e.to_string())?;
Ok(serde_json::json!({
"success": deleted,
"nodeId": args.id,
"message": if deleted { "Node deleted successfully" } else { "Node not found" },
}))
}


@ -0,0 +1,277 @@
//! Memory States Tool
//!
//! Query and manage memory states (Active, Dormant, Silent, Unavailable).
//! Based on accessibility continuum theory.
use serde_json::Value;
use std::sync::Arc;
use tokio::sync::Mutex;
use vestige_core::{MemoryState, Storage};
// Accessibility thresholds based on retention strength
const ACCESSIBILITY_ACTIVE: f64 = 0.7;
const ACCESSIBILITY_DORMANT: f64 = 0.4;
const ACCESSIBILITY_SILENT: f64 = 0.1;
/// Compute accessibility score from memory strengths
/// Combines retention, retrieval, and storage strengths
fn compute_accessibility(retention: f64, retrieval: f64, storage: f64) -> f64 {
// Weighted combination: retention is most important for accessibility
retention * 0.5 + retrieval * 0.3 + storage * 0.2
}
/// Determine memory state from accessibility score
fn state_from_accessibility(accessibility: f64) -> MemoryState {
if accessibility >= ACCESSIBILITY_ACTIVE {
MemoryState::Active
} else if accessibility >= ACCESSIBILITY_DORMANT {
MemoryState::Dormant
} else if accessibility >= ACCESSIBILITY_SILENT {
MemoryState::Silent
} else {
MemoryState::Unavailable
}
}
/// Input schema for get_memory_state tool
pub fn get_schema() -> Value {
serde_json::json!({
"type": "object",
"properties": {
"memory_id": {
"type": "string",
"description": "The memory ID to check state for"
}
},
"required": ["memory_id"]
})
}
/// Input schema for list_by_state tool
pub fn list_schema() -> Value {
serde_json::json!({
"type": "object",
"properties": {
"state": {
"type": "string",
"enum": ["active", "dormant", "silent", "unavailable"],
"description": "Filter memories by state"
},
"limit": {
"type": "integer",
"description": "Maximum results (default: 20)"
}
},
"required": []
})
}
/// Input schema for state_stats tool
pub fn stats_schema() -> Value {
serde_json::json!({
"type": "object",
"properties": {},
})
}
/// Get the cognitive state of a specific memory
pub async fn execute_get(
storage: &Arc<Mutex<Storage>>,
args: Option<Value>,
) -> Result<Value, String> {
let args = args.ok_or("Missing arguments")?;
let memory_id = args["memory_id"]
.as_str()
.ok_or("memory_id is required")?;
let storage = storage.lock().await;
// Get the memory
let memory = storage.get_node(memory_id)
.map_err(|e| format!("Error: {}", e))?
.ok_or("Memory not found")?;
// Calculate accessibility score
let accessibility = compute_accessibility(
memory.retention_strength,
memory.retrieval_strength,
memory.storage_strength,
);
// Determine state
let state = state_from_accessibility(accessibility);
let state_description = match state {
MemoryState::Active => "Easily retrievable - this memory is fresh and accessible",
MemoryState::Dormant => "Retrievable with effort - may need cues to recall",
MemoryState::Silent => "Difficult to retrieve - exists but hard to access",
MemoryState::Unavailable => "Cannot be retrieved - needs significant reinforcement",
};
Ok(serde_json::json!({
"memoryId": memory_id,
"content": memory.content,
"state": format!("{:?}", state),
"accessibility": accessibility,
"description": state_description,
"components": {
"retentionStrength": memory.retention_strength,
"retrievalStrength": memory.retrieval_strength,
"storageStrength": memory.storage_strength
},
"thresholds": {
"active": ACCESSIBILITY_ACTIVE,
"dormant": ACCESSIBILITY_DORMANT,
"silent": ACCESSIBILITY_SILENT
}
}))
}
/// List memories by state
pub async fn execute_list(
storage: &Arc<Mutex<Storage>>,
args: Option<Value>,
) -> Result<Value, String> {
let args = args.unwrap_or(serde_json::json!({}));
let state_filter = args["state"].as_str();
let limit = args["limit"].as_i64().unwrap_or(20) as usize;
let storage = storage.lock().await;
    // Fetch up to 500 memories to categorize
let memories = storage.get_all_nodes(500, 0)
.map_err(|e| e.to_string())?;
// Categorize by state
let mut active = Vec::new();
let mut dormant = Vec::new();
let mut silent = Vec::new();
let mut unavailable = Vec::new();
for memory in memories {
let accessibility = compute_accessibility(
memory.retention_strength,
memory.retrieval_strength,
memory.storage_strength,
);
let entry = serde_json::json!({
"id": memory.id,
"content": memory.content,
"accessibility": accessibility,
"retentionStrength": memory.retention_strength
});
let state = state_from_accessibility(accessibility);
match state {
MemoryState::Active => active.push(entry),
MemoryState::Dormant => dormant.push(entry),
MemoryState::Silent => silent.push(entry),
MemoryState::Unavailable => unavailable.push(entry),
}
}
// Apply filter and limit
let result = match state_filter {
Some("active") => serde_json::json!({
"state": "active",
"count": active.len(),
"memories": active.into_iter().take(limit).collect::<Vec<_>>()
}),
Some("dormant") => serde_json::json!({
"state": "dormant",
"count": dormant.len(),
"memories": dormant.into_iter().take(limit).collect::<Vec<_>>()
}),
Some("silent") => serde_json::json!({
"state": "silent",
"count": silent.len(),
"memories": silent.into_iter().take(limit).collect::<Vec<_>>()
}),
Some("unavailable") => serde_json::json!({
"state": "unavailable",
"count": unavailable.len(),
"memories": unavailable.into_iter().take(limit).collect::<Vec<_>>()
}),
_ => serde_json::json!({
"all": true,
"active": { "count": active.len(), "memories": active.into_iter().take(limit).collect::<Vec<_>>() },
"dormant": { "count": dormant.len(), "memories": dormant.into_iter().take(limit).collect::<Vec<_>>() },
"silent": { "count": silent.len(), "memories": silent.into_iter().take(limit).collect::<Vec<_>>() },
"unavailable": { "count": unavailable.len(), "memories": unavailable.into_iter().take(limit).collect::<Vec<_>>() }
})
};
Ok(result)
}
/// Get memory state statistics
pub async fn execute_stats(
storage: &Arc<Mutex<Storage>>,
) -> Result<Value, String> {
let storage = storage.lock().await;
let memories = storage.get_all_nodes(1000, 0)
.map_err(|e| e.to_string())?;
let total = memories.len();
let mut active_count = 0;
let mut dormant_count = 0;
let mut silent_count = 0;
let mut unavailable_count = 0;
let mut total_accessibility = 0.0;
for memory in &memories {
let accessibility = compute_accessibility(
memory.retention_strength,
memory.retrieval_strength,
memory.storage_strength,
);
total_accessibility += accessibility;
let state = state_from_accessibility(accessibility);
match state {
MemoryState::Active => active_count += 1,
MemoryState::Dormant => dormant_count += 1,
MemoryState::Silent => silent_count += 1,
MemoryState::Unavailable => unavailable_count += 1,
}
}
let avg_accessibility = if total > 0 { total_accessibility / total as f64 } else { 0.0 };
Ok(serde_json::json!({
"totalMemories": total,
"averageAccessibility": avg_accessibility,
"stateDistribution": {
"active": {
"count": active_count,
"percentage": if total > 0 { (active_count as f64 / total as f64) * 100.0 } else { 0.0 }
},
"dormant": {
"count": dormant_count,
"percentage": if total > 0 { (dormant_count as f64 / total as f64) * 100.0 } else { 0.0 }
},
"silent": {
"count": silent_count,
"percentage": if total > 0 { (silent_count as f64 / total as f64) * 100.0 } else { 0.0 }
},
"unavailable": {
"count": unavailable_count,
"percentage": if total > 0 { (unavailable_count as f64 / total as f64) * 100.0 } else { 0.0 }
}
},
"thresholds": {
"active": ACCESSIBILITY_ACTIVE,
"dormant": ACCESSIBILITY_DORMANT,
"silent": ACCESSIBILITY_SILENT
},
"science": {
"theory": "Accessibility Continuum (Tulving, 1983)",
"principle": "Memories exist on a continuum from highly accessible to completely inaccessible"
}
}))
}
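
The accessibility math at the top of this file is easy to check by hand. A self-contained sketch of the weighting and thresholds (a local `State` enum stands in for vestige-core's `MemoryState`):

```rust
#[derive(Debug, PartialEq)]
enum State { Active, Dormant, Silent, Unavailable }

// Weighted combination: retention dominates, then retrieval, then storage.
fn accessibility(retention: f64, retrieval: f64, storage: f64) -> f64 {
    retention * 0.5 + retrieval * 0.3 + storage * 0.2
}

// Thresholds match ACCESSIBILITY_ACTIVE / _DORMANT / _SILENT above.
fn state(a: f64) -> State {
    if a >= 0.7 { State::Active }
    else if a >= 0.4 { State::Dormant }
    else if a >= 0.1 { State::Silent }
    else { State::Unavailable }
}

fn main() {
    // retention 0.8, retrieval 0.6, storage 0.5:
    // 0.8*0.5 + 0.6*0.3 + 0.5*0.2 = 0.40 + 0.18 + 0.10 = 0.68 → Dormant
    let a = accessibility(0.8, 0.6, 0.5);
    assert!((a - 0.68).abs() < 1e-9);
    assert_eq!(state(a), State::Dormant);
    println!("{:.2} -> {:?}", a, state(a));
}
```

Note how a memory just under the 0.7 threshold lands in Dormant even with strong retention alone.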


@ -0,0 +1,18 @@
//! MCP Tools
//!
//! Tool implementations for the Vestige MCP server.
pub mod codebase;
pub mod consolidate;
pub mod ingest;
pub mod intentions;
pub mod knowledge;
pub mod recall;
pub mod review;
pub mod search;
pub mod stats;
// Neuroscience-inspired tools
pub mod context;
pub mod memory_states;
pub mod tagging;


@ -0,0 +1,403 @@
//! Recall Tool
//!
//! Search and retrieve knowledge from memory.
use serde::Deserialize;
use serde_json::Value;
use std::sync::Arc;
use tokio::sync::Mutex;
use vestige_core::{RecallInput, SearchMode, Storage};
/// Input schema for recall tool
pub fn schema() -> Value {
serde_json::json!({
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "Search query"
},
"limit": {
"type": "integer",
"description": "Maximum number of results (default: 10)",
"default": 10,
"minimum": 1,
"maximum": 100
},
"min_retention": {
"type": "number",
"description": "Minimum retention strength (0.0-1.0, default: 0.0)",
"default": 0.0,
"minimum": 0.0,
"maximum": 1.0
}
},
"required": ["query"]
})
}
#[derive(Debug, Deserialize)]
// No rename_all here: the input schema advertises snake_case keys
// ("min_retention"), and camelCase renaming would silently drop that argument.
struct RecallArgs {
query: String,
limit: Option<i32>,
min_retention: Option<f64>,
}
pub async fn execute(
storage: &Arc<Mutex<Storage>>,
args: Option<Value>,
) -> Result<Value, String> {
let args: RecallArgs = match args {
Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
None => return Err("Missing arguments".to_string()),
};
if args.query.trim().is_empty() {
return Err("Query cannot be empty".to_string());
}
let input = RecallInput {
query: args.query.clone(),
limit: args.limit.unwrap_or(10).clamp(1, 100),
min_retention: args.min_retention.unwrap_or(0.0).clamp(0.0, 1.0),
search_mode: SearchMode::Hybrid,
valid_at: None,
};
let storage = storage.lock().await;
let nodes = storage.recall(input).map_err(|e| e.to_string())?;
let results: Vec<Value> = nodes
.iter()
.map(|n| {
serde_json::json!({
"id": n.id,
"content": n.content,
"nodeType": n.node_type,
"retentionStrength": n.retention_strength,
"stability": n.stability,
"difficulty": n.difficulty,
"reps": n.reps,
"tags": n.tags,
"source": n.source,
"createdAt": n.created_at.to_rfc3339(),
"lastAccessed": n.last_accessed.to_rfc3339(),
"nextReview": n.next_review.map(|d| d.to_rfc3339()),
})
})
.collect();
Ok(serde_json::json!({
"query": args.query,
"total": results.len(),
"results": results,
}))
}
// ============================================================================
// TESTS
// ============================================================================
#[cfg(test)]
mod tests {
use super::*;
use vestige_core::IngestInput;
use tempfile::TempDir;
/// Create a test storage instance with a temporary database
async fn test_storage() -> (Arc<Mutex<Storage>>, TempDir) {
let dir = TempDir::new().unwrap();
let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
(Arc::new(Mutex::new(storage)), dir)
}
/// Helper to ingest test content
async fn ingest_test_content(storage: &Arc<Mutex<Storage>>, content: &str) -> String {
let input = IngestInput {
content: content.to_string(),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: vec![],
valid_from: None,
valid_until: None,
};
let mut storage_lock = storage.lock().await;
let node = storage_lock.ingest(input).unwrap();
node.id
}
// ========================================================================
// QUERY VALIDATION TESTS
// ========================================================================
#[tokio::test]
async fn test_recall_empty_query_fails() {
let (storage, _dir) = test_storage().await;
let args = serde_json::json!({ "query": "" });
let result = execute(&storage, Some(args)).await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("empty"));
}
#[tokio::test]
async fn test_recall_whitespace_only_query_fails() {
let (storage, _dir) = test_storage().await;
let args = serde_json::json!({ "query": " \t\n " });
let result = execute(&storage, Some(args)).await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("empty"));
}
#[tokio::test]
async fn test_recall_missing_arguments_fails() {
let (storage, _dir) = test_storage().await;
let result = execute(&storage, None).await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("Missing arguments"));
}
#[tokio::test]
async fn test_recall_missing_query_field_fails() {
let (storage, _dir) = test_storage().await;
let args = serde_json::json!({ "limit": 10 });
let result = execute(&storage, Some(args)).await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("Invalid arguments"));
}
// ========================================================================
// LIMIT CLAMPING TESTS
// ========================================================================
#[tokio::test]
async fn test_recall_limit_clamped_to_minimum() {
let (storage, _dir) = test_storage().await;
// Ingest some content first
ingest_test_content(&storage, "Test content for limit clamping").await;
// Try with limit 0 - should clamp to 1
let args = serde_json::json!({
"query": "test",
"limit": 0
});
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
}
#[tokio::test]
async fn test_recall_limit_clamped_to_maximum() {
let (storage, _dir) = test_storage().await;
// Ingest some content first
ingest_test_content(&storage, "Test content for max limit").await;
// Try with limit 1000 - should clamp to 100
let args = serde_json::json!({
"query": "test",
"limit": 1000
});
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
}
#[tokio::test]
async fn test_recall_negative_limit_clamped() {
let (storage, _dir) = test_storage().await;
ingest_test_content(&storage, "Test content for negative limit").await;
let args = serde_json::json!({
"query": "test",
"limit": -5
});
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
}
// ========================================================================
// MIN_RETENTION CLAMPING TESTS
// ========================================================================
#[tokio::test]
async fn test_recall_min_retention_clamped_to_zero() {
let (storage, _dir) = test_storage().await;
ingest_test_content(&storage, "Test content for retention clamping").await;
let args = serde_json::json!({
"query": "test",
"min_retention": -0.5
});
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
}
#[tokio::test]
async fn test_recall_min_retention_clamped_to_one() {
let (storage, _dir) = test_storage().await;
ingest_test_content(&storage, "Test content for max retention").await;
let args = serde_json::json!({
"query": "test",
"min_retention": 1.5
});
let result = execute(&storage, Some(args)).await;
// Should succeed but return no results (retention > 1.0 clamped to 1.0)
assert!(result.is_ok());
}
// ========================================================================
// SUCCESSFUL RECALL TESTS
// ========================================================================
#[tokio::test]
async fn test_recall_basic_query_succeeds() {
let (storage, _dir) = test_storage().await;
ingest_test_content(&storage, "The Rust programming language is memory safe.").await;
let args = serde_json::json!({ "query": "rust" });
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
assert_eq!(value["query"], "rust");
assert!(value["total"].is_number());
assert!(value["results"].is_array());
}
#[tokio::test]
async fn test_recall_returns_matching_content() {
let (storage, _dir) = test_storage().await;
let node_id = ingest_test_content(&storage, "Python is a dynamic programming language.").await;
let args = serde_json::json!({ "query": "python" });
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
let results = value["results"].as_array().unwrap();
assert!(!results.is_empty());
assert_eq!(results[0]["id"], node_id);
}
#[tokio::test]
async fn test_recall_with_limit() {
let (storage, _dir) = test_storage().await;
// Ingest multiple items
ingest_test_content(&storage, "Testing content one").await;
ingest_test_content(&storage, "Testing content two").await;
ingest_test_content(&storage, "Testing content three").await;
let args = serde_json::json!({
"query": "testing",
"limit": 2
});
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
let results = value["results"].as_array().unwrap();
assert!(results.len() <= 2);
}
#[tokio::test]
async fn test_recall_empty_database_returns_empty_array() {
// With hybrid search (keyword + semantic), any query against content
// may return low-similarity matches. The true "no matches" case
// is an empty database.
let (storage, _dir) = test_storage().await;
// Don't ingest anything - database is empty
let args = serde_json::json!({ "query": "anything" });
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
assert_eq!(value["total"], 0);
assert!(value["results"].as_array().unwrap().is_empty());
}
#[tokio::test]
async fn test_recall_result_contains_expected_fields() {
let (storage, _dir) = test_storage().await;
ingest_test_content(&storage, "Testing field presence in recall results.").await;
let args = serde_json::json!({ "query": "testing" });
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
let results = value["results"].as_array().unwrap();
if !results.is_empty() {
let first = &results[0];
assert!(first["id"].is_string());
assert!(first["content"].is_string());
assert!(first["nodeType"].is_string());
assert!(first["retentionStrength"].is_number());
assert!(first["stability"].is_number());
assert!(first["difficulty"].is_number());
assert!(first["reps"].is_number());
assert!(first["createdAt"].is_string());
assert!(first["lastAccessed"].is_string());
}
}
// ========================================================================
// DEFAULT VALUES TESTS
// ========================================================================
#[tokio::test]
async fn test_recall_default_limit_is_10() {
let (storage, _dir) = test_storage().await;
// Ingest more than 10 items
for i in 0..15 {
ingest_test_content(&storage, &format!("Item number {}", i)).await;
}
let args = serde_json::json!({ "query": "item" });
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
let results = value["results"].as_array().unwrap();
assert!(results.len() <= 10);
}
// ========================================================================
// SCHEMA TESTS
// ========================================================================
#[test]
fn test_schema_has_required_fields() {
let schema_value = schema();
assert_eq!(schema_value["type"], "object");
assert!(schema_value["properties"]["query"].is_object());
assert!(schema_value["required"].as_array().unwrap().contains(&serde_json::json!("query")));
}
#[test]
fn test_schema_has_optional_fields() {
let schema_value = schema();
assert!(schema_value["properties"]["limit"].is_object());
assert!(schema_value["properties"]["min_retention"].is_object());
}
#[test]
fn test_schema_limit_has_bounds() {
let schema_value = schema();
let limit_schema = &schema_value["properties"]["limit"];
assert_eq!(limit_schema["minimum"], 1);
assert_eq!(limit_schema["maximum"], 100);
assert_eq!(limit_schema["default"], 10);
}
#[test]
fn test_schema_min_retention_has_bounds() {
let schema_value = schema();
let retention_schema = &schema_value["properties"]["min_retention"];
assert_eq!(retention_schema["minimum"], 0.0);
assert_eq!(retention_schema["maximum"], 1.0);
assert_eq!(retention_schema["default"], 0.0);
}
}
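For reference, the argument normalization that `execute` and the clamping tests above rely on can be exercised in isolation. The bounds are the ones from the tool schema (limit 1..=100, min_retention 0.0..=1.0); the `normalize` helper itself is illustrative and not part of vestige-core:

```rust
// Standalone sketch of the argument-normalization step in `execute` above.
// The clamp bounds match the tool schema (limit: 1..=100, min_retention: 0.0..=1.0).
fn normalize(limit: Option<i32>, min_retention: Option<f64>) -> (i32, f64) {
    (
        limit.unwrap_or(10).clamp(1, 100),
        min_retention.unwrap_or(0.0).clamp(0.0, 1.0),
    )
}

fn main() {
    assert_eq!(normalize(Some(-5), Some(-0.5)), (1, 0.0)); // below bounds
    assert_eq!(normalize(Some(1000), Some(1.5)), (100, 1.0)); // above bounds
    assert_eq!(normalize(None, None), (10, 0.0)); // schema defaults
}
```

This is why the out-of-range tests assert `is_ok()` rather than an error: invalid numeric inputs are clamped, not rejected.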


@@ -0,0 +1,454 @@
//! Review Tool
//!
//! Mark memories as reviewed using FSRS-6 algorithm.
use serde::Deserialize;
use serde_json::Value;
use std::sync::Arc;
use tokio::sync::Mutex;
use vestige_core::{Rating, Storage};
/// Input schema for mark_reviewed tool
pub fn schema() -> Value {
serde_json::json!({
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "The ID of the memory to review"
},
"rating": {
"type": "integer",
"description": "Review rating: 1=Again (forgot), 2=Hard, 3=Good, 4=Easy",
"minimum": 1,
"maximum": 4,
"default": 3
}
},
"required": ["id"]
})
}
#[derive(Debug, Deserialize)]
struct ReviewArgs {
id: String,
rating: Option<i32>,
}
pub async fn execute(
storage: &Arc<Mutex<Storage>>,
args: Option<Value>,
) -> Result<Value, String> {
let args: ReviewArgs = match args {
Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
None => return Err("Missing arguments".to_string()),
};
// Validate UUID
uuid::Uuid::parse_str(&args.id).map_err(|_| "Invalid node ID format".to_string())?;
let rating_value = args.rating.unwrap_or(3);
if !(1..=4).contains(&rating_value) {
return Err("Rating must be between 1 and 4".to_string());
}
let rating = Rating::from_i32(rating_value)
.ok_or_else(|| "Invalid rating value".to_string())?;
let mut storage = storage.lock().await;
// Get node before review for comparison
let before = storage.get_node(&args.id).map_err(|e| e.to_string())?
.ok_or_else(|| format!("Node not found: {}", args.id))?;
let node = storage.mark_reviewed(&args.id, rating).map_err(|e| e.to_string())?;
let rating_name = match rating {
Rating::Again => "Again",
Rating::Hard => "Hard",
Rating::Good => "Good",
Rating::Easy => "Easy",
};
Ok(serde_json::json!({
"success": true,
"nodeId": node.id,
"rating": rating_name,
"fsrs": {
"previousRetention": before.retention_strength,
"newRetention": node.retention_strength,
"previousStability": before.stability,
"newStability": node.stability,
"difficulty": node.difficulty,
"reps": node.reps,
"lapses": node.lapses,
},
"nextReview": node.next_review.map(|d| d.to_rfc3339()),
"message": format!("Memory reviewed with rating '{}'. Retention: {:.2} -> {:.2}",
rating_name, before.retention_strength, node.retention_strength),
}))
}
// ============================================================================
// TESTS
// ============================================================================
#[cfg(test)]
mod tests {
use super::*;
use vestige_core::IngestInput;
use tempfile::TempDir;
/// Create a test storage instance with a temporary database
async fn test_storage() -> (Arc<Mutex<Storage>>, TempDir) {
let dir = TempDir::new().unwrap();
let storage = Storage::new(Some(dir.path().join("test.db"))).unwrap();
(Arc::new(Mutex::new(storage)), dir)
}
/// Helper to ingest test content and return node ID
async fn ingest_test_content(storage: &Arc<Mutex<Storage>>, content: &str) -> String {
let input = IngestInput {
content: content.to_string(),
node_type: "fact".to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: vec![],
valid_from: None,
valid_until: None,
};
let mut storage_lock = storage.lock().await;
let node = storage_lock.ingest(input).unwrap();
node.id
}
// ========================================================================
// RATING VALIDATION TESTS
// ========================================================================
#[tokio::test]
async fn test_review_rating_zero_fails() {
let (storage, _dir) = test_storage().await;
let node_id = ingest_test_content(&storage, "Test content for rating validation").await;
let args = serde_json::json!({
"id": node_id,
"rating": 0
});
let result = execute(&storage, Some(args)).await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("between 1 and 4"));
}
#[tokio::test]
async fn test_review_rating_five_fails() {
let (storage, _dir) = test_storage().await;
let node_id = ingest_test_content(&storage, "Test content for high rating").await;
let args = serde_json::json!({
"id": node_id,
"rating": 5
});
let result = execute(&storage, Some(args)).await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("between 1 and 4"));
}
#[tokio::test]
async fn test_review_rating_negative_fails() {
let (storage, _dir) = test_storage().await;
let node_id = ingest_test_content(&storage, "Test content for negative rating").await;
let args = serde_json::json!({
"id": node_id,
"rating": -1
});
let result = execute(&storage, Some(args)).await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("between 1 and 4"));
}
#[tokio::test]
async fn test_review_rating_very_high_fails() {
let (storage, _dir) = test_storage().await;
let node_id = ingest_test_content(&storage, "Test content for very high rating").await;
let args = serde_json::json!({
"id": node_id,
"rating": 100
});
let result = execute(&storage, Some(args)).await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("between 1 and 4"));
}
// ========================================================================
// VALID RATINGS TESTS
// ========================================================================
#[tokio::test]
async fn test_review_rating_again_succeeds() {
let (storage, _dir) = test_storage().await;
let node_id = ingest_test_content(&storage, "Test content for Again rating").await;
let args = serde_json::json!({
"id": node_id,
"rating": 1
});
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
assert_eq!(value["rating"], "Again");
}
#[tokio::test]
async fn test_review_rating_hard_succeeds() {
let (storage, _dir) = test_storage().await;
let node_id = ingest_test_content(&storage, "Test content for Hard rating").await;
let args = serde_json::json!({
"id": node_id,
"rating": 2
});
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
assert_eq!(value["rating"], "Hard");
}
#[tokio::test]
async fn test_review_rating_good_succeeds() {
let (storage, _dir) = test_storage().await;
let node_id = ingest_test_content(&storage, "Test content for Good rating").await;
let args = serde_json::json!({
"id": node_id,
"rating": 3
});
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
assert_eq!(value["rating"], "Good");
}
#[tokio::test]
async fn test_review_rating_easy_succeeds() {
let (storage, _dir) = test_storage().await;
let node_id = ingest_test_content(&storage, "Test content for Easy rating").await;
let args = serde_json::json!({
"id": node_id,
"rating": 4
});
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
assert_eq!(value["rating"], "Easy");
}
// ========================================================================
// NODE ID VALIDATION TESTS
// ========================================================================
#[tokio::test]
async fn test_review_invalid_uuid_fails() {
let (storage, _dir) = test_storage().await;
let args = serde_json::json!({
"id": "not-a-valid-uuid",
"rating": 3
});
let result = execute(&storage, Some(args)).await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("Invalid node ID"));
}
#[tokio::test]
async fn test_review_nonexistent_node_fails() {
let (storage, _dir) = test_storage().await;
let fake_uuid = uuid::Uuid::new_v4().to_string();
let args = serde_json::json!({
"id": fake_uuid,
"rating": 3
});
let result = execute(&storage, Some(args)).await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("not found"));
}
#[tokio::test]
async fn test_review_missing_id_fails() {
let (storage, _dir) = test_storage().await;
let args = serde_json::json!({
"rating": 3
});
let result = execute(&storage, Some(args)).await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("Invalid arguments"));
}
#[tokio::test]
async fn test_review_missing_arguments_fails() {
let (storage, _dir) = test_storage().await;
let result = execute(&storage, None).await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("Missing arguments"));
}
// ========================================================================
// FSRS UPDATE TESTS
// ========================================================================
#[tokio::test]
async fn test_review_updates_reps_counter() {
let (storage, _dir) = test_storage().await;
let node_id = ingest_test_content(&storage, "Test content for reps counter").await;
let args = serde_json::json!({
"id": node_id,
"rating": 3
});
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
assert_eq!(value["fsrs"]["reps"], 1);
}
#[tokio::test]
async fn test_review_multiple_times_increases_reps() {
let (storage, _dir) = test_storage().await;
let node_id = ingest_test_content(&storage, "Test content for multiple reviews").await;
// Review first time
let args = serde_json::json!({ "id": node_id, "rating": 3 });
execute(&storage, Some(args)).await.unwrap();
// Review second time
let args = serde_json::json!({ "id": node_id, "rating": 3 });
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
assert_eq!(value["fsrs"]["reps"], 2);
}
#[tokio::test]
async fn test_same_day_again_does_not_count_as_lapse() {
// FSRS-6 treats same-day reviews differently - they don't increment lapses.
// This is by design: same-day reviews indicate the user is still learning,
// not that they've forgotten and need to re-learn (which is what lapses track).
let (storage, _dir) = test_storage().await;
let node_id = ingest_test_content(&storage, "Test content for lapses").await;
// First review to get out of new state
let args = serde_json::json!({ "id": node_id, "rating": 3 });
execute(&storage, Some(args)).await.unwrap();
// Immediate "Again" rating (same-day) should NOT count as a lapse
let args = serde_json::json!({ "id": node_id, "rating": 1 });
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
// Same-day reviews preserve lapse count per FSRS-6 algorithm
assert_eq!(value["fsrs"]["lapses"].as_i64().unwrap(), 0);
}
#[tokio::test]
async fn test_review_returns_next_review_date() {
let (storage, _dir) = test_storage().await;
let node_id = ingest_test_content(&storage, "Test content for next review").await;
let args = serde_json::json!({
"id": node_id,
"rating": 3
});
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
assert!(value["nextReview"].is_string());
}
// ========================================================================
// DEFAULT RATING TESTS
// ========================================================================
#[tokio::test]
async fn test_review_default_rating_is_good() {
let (storage, _dir) = test_storage().await;
let node_id = ingest_test_content(&storage, "Test content for default rating").await;
// Omit rating, should default to 3 (Good)
let args = serde_json::json!({
"id": node_id
});
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
assert_eq!(value["rating"], "Good");
}
// ========================================================================
// RESPONSE FORMAT TESTS
// ========================================================================
#[tokio::test]
async fn test_review_response_contains_expected_fields() {
let (storage, _dir) = test_storage().await;
let node_id = ingest_test_content(&storage, "Test content for response format").await;
let args = serde_json::json!({
"id": node_id,
"rating": 3
});
let result = execute(&storage, Some(args)).await;
assert!(result.is_ok());
let value = result.unwrap();
assert_eq!(value["success"], true);
assert!(value["nodeId"].is_string());
assert!(value["rating"].is_string());
assert!(value["fsrs"].is_object());
assert!(value["fsrs"]["previousRetention"].is_number());
assert!(value["fsrs"]["newRetention"].is_number());
assert!(value["fsrs"]["previousStability"].is_number());
assert!(value["fsrs"]["newStability"].is_number());
assert!(value["fsrs"]["difficulty"].is_number());
assert!(value["fsrs"]["reps"].is_number());
assert!(value["fsrs"]["lapses"].is_number());
assert!(value["message"].is_string());
}
// ========================================================================
// SCHEMA TESTS
// ========================================================================
#[test]
fn test_schema_has_required_fields() {
let schema_value = schema();
assert_eq!(schema_value["type"], "object");
assert!(schema_value["properties"]["id"].is_object());
assert!(schema_value["required"].as_array().unwrap().contains(&serde_json::json!("id")));
}
#[test]
fn test_schema_rating_has_bounds() {
let schema_value = schema();
let rating_schema = &schema_value["properties"]["rating"];
assert_eq!(rating_schema["minimum"], 1);
assert_eq!(rating_schema["maximum"], 4);
assert_eq!(rating_schema["default"], 3);
}
}
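The rating contract the tests above pin down (1=Again, 2=Hard, 3=Good, 4=Easy, everything else rejected) can be restated as a small self-contained sketch. This mirrors the behavior assumed of `vestige_core::Rating::from_i32`, not its actual source:

```rust
// Illustrative restatement of the review rating contract:
// 1=Again, 2=Hard, 3=Good, 4=Easy; anything else is rejected.
#[derive(Debug, PartialEq)]
enum Rating { Again, Hard, Good, Easy }

fn rating_from_i32(value: i32) -> Option<Rating> {
    match value {
        1 => Some(Rating::Again),
        2 => Some(Rating::Hard),
        3 => Some(Rating::Good),
        4 => Some(Rating::Easy),
        _ => None, // 0, 5, negatives, etc.
    }
}

fn main() {
    assert_eq!(rating_from_i32(3), Some(Rating::Good)); // the schema default
    assert_eq!(rating_from_i32(0), None); // rejected, as the tests expect
    assert_eq!(rating_from_i32(5), None);
}
```

Note that `execute` range-checks the integer before calling the constructor, so the `ok_or_else` on `from_i32` is a defensive second check rather than the primary validation.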


@@ -0,0 +1,192 @@
//! Search Tools
//!
//! Semantic and hybrid search implementations.
use serde::Deserialize;
use serde_json::Value;
use std::sync::Arc;
use tokio::sync::Mutex;
use vestige_core::Storage;
/// Input schema for semantic_search tool
pub fn semantic_schema() -> Value {
serde_json::json!({
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "Search query for semantic similarity"
},
"limit": {
"type": "integer",
"description": "Maximum number of results (default: 10)",
"default": 10,
"minimum": 1,
"maximum": 50
},
"min_similarity": {
"type": "number",
"description": "Minimum similarity threshold (0.0-1.0, default: 0.5)",
"default": 0.5,
"minimum": 0.0,
"maximum": 1.0
}
},
"required": ["query"]
})
}
/// Input schema for hybrid_search tool
pub fn hybrid_schema() -> Value {
serde_json::json!({
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "Search query"
},
"limit": {
"type": "integer",
"description": "Maximum number of results (default: 10)",
"default": 10,
"minimum": 1,
"maximum": 50
},
"keyword_weight": {
"type": "number",
"description": "Weight for keyword search (0.0-1.0, default: 0.5)",
"default": 0.5,
"minimum": 0.0,
"maximum": 1.0
},
"semantic_weight": {
"type": "number",
"description": "Weight for semantic search (0.0-1.0, default: 0.5)",
"default": 0.5,
"minimum": 0.0,
"maximum": 1.0
}
},
"required": ["query"]
})
}
// Schema keys are snake_case (`min_similarity`), so no camelCase renaming here.
#[derive(Debug, Deserialize)]
struct SemanticSearchArgs {
query: String,
limit: Option<i32>,
min_similarity: Option<f32>,
}
// Schema keys are snake_case (`keyword_weight`, `semantic_weight`).
#[derive(Debug, Deserialize)]
struct HybridSearchArgs {
query: String,
limit: Option<i32>,
keyword_weight: Option<f32>,
semantic_weight: Option<f32>,
}
pub async fn execute_semantic(
storage: &Arc<Mutex<Storage>>,
args: Option<Value>,
) -> Result<Value, String> {
let args: SemanticSearchArgs = match args {
Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
None => return Err("Missing arguments".to_string()),
};
if args.query.trim().is_empty() {
return Err("Query cannot be empty".to_string());
}
let storage = storage.lock().await;
// Check if embeddings are ready
if !storage.is_embedding_ready() {
return Ok(serde_json::json!({
"error": "Embedding service not ready",
"hint": "Run consolidation first to initialize embeddings, or the model may still be loading.",
}));
}
let results = storage
.semantic_search(
&args.query,
args.limit.unwrap_or(10).clamp(1, 50),
args.min_similarity.unwrap_or(0.5).clamp(0.0, 1.0),
)
.map_err(|e| e.to_string())?;
let formatted: Vec<Value> = results
.iter()
.map(|r| {
serde_json::json!({
"id": r.node.id,
"content": r.node.content,
"similarity": r.similarity,
"nodeType": r.node.node_type,
"tags": r.node.tags,
"retentionStrength": r.node.retention_strength,
})
})
.collect();
Ok(serde_json::json!({
"query": args.query,
"method": "semantic",
"total": formatted.len(),
"results": formatted,
}))
}
pub async fn execute_hybrid(
storage: &Arc<Mutex<Storage>>,
args: Option<Value>,
) -> Result<Value, String> {
let args: HybridSearchArgs = match args {
Some(v) => serde_json::from_value(v).map_err(|e| format!("Invalid arguments: {}", e))?,
None => return Err("Missing arguments".to_string()),
};
if args.query.trim().is_empty() {
return Err("Query cannot be empty".to_string());
}
let storage = storage.lock().await;
let results = storage
.hybrid_search(
&args.query,
args.limit.unwrap_or(10).clamp(1, 50),
args.keyword_weight.unwrap_or(0.5).clamp(0.0, 1.0),
args.semantic_weight.unwrap_or(0.5).clamp(0.0, 1.0),
)
.map_err(|e| e.to_string())?;
let formatted: Vec<Value> = results
.iter()
.map(|r| {
serde_json::json!({
"id": r.node.id,
"content": r.node.content,
"combinedScore": r.combined_score,
"keywordScore": r.keyword_score,
"semanticScore": r.semantic_score,
"matchType": format!("{:?}", r.match_type),
"nodeType": r.node.node_type,
"tags": r.node.tags,
"retentionStrength": r.node.retention_strength,
})
})
.collect();
Ok(serde_json::json!({
"query": args.query,
"method": "hybrid",
"total": formatted.len(),
"results": formatted,
}))
}
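A hedged sketch of how a weighted keyword/semantic blend is commonly computed. The actual scoring lives inside vestige-core's `hybrid_search`; this only shows the shape implied by the `keyword_weight` and `semantic_weight` parameters in the schema above, with normalization by the weight sum as an assumption:

```rust
// Weighted blend of keyword and semantic scores, normalized by total weight.
// Illustrative only - vestige-core's actual formula is not shown in this file.
fn combined_score(keyword: f32, semantic: f32, kw_weight: f32, sem_weight: f32) -> f32 {
    let total = kw_weight + sem_weight;
    if total == 0.0 {
        return 0.0; // degenerate case: both weights zero
    }
    (keyword * kw_weight + semantic * sem_weight) / total
}

fn main() {
    // Equal weights: plain average of the two scores.
    assert!((combined_score(0.8, 0.4, 0.5, 0.5) - 0.6).abs() < 1e-6);
    // Keyword-only weighting degenerates to the keyword score.
    assert!((combined_score(0.8, 0.4, 1.0, 0.0) - 0.8).abs() < 1e-6);
}
```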


@@ -0,0 +1,123 @@
//! Stats Tools
//!
//! Memory statistics and health check.
use serde_json::Value;
use std::sync::Arc;
use tokio::sync::Mutex;
use vestige_core::{MemoryStats, Storage};
/// Input schema for get_stats tool
pub fn stats_schema() -> Value {
serde_json::json!({
"type": "object",
"properties": {},
})
}
/// Input schema for health_check tool
pub fn health_schema() -> Value {
serde_json::json!({
"type": "object",
"properties": {},
})
}
pub async fn execute_stats(storage: &Arc<Mutex<Storage>>) -> Result<Value, String> {
let storage = storage.lock().await;
let stats = storage.get_stats().map_err(|e| e.to_string())?;
Ok(serde_json::json!({
"totalNodes": stats.total_nodes,
"nodesDueForReview": stats.nodes_due_for_review,
"averageRetention": stats.average_retention,
"averageStorageStrength": stats.average_storage_strength,
"averageRetrievalStrength": stats.average_retrieval_strength,
"oldestMemory": stats.oldest_memory.map(|d| d.to_rfc3339()),
"newestMemory": stats.newest_memory.map(|d| d.to_rfc3339()),
"nodesWithEmbeddings": stats.nodes_with_embeddings,
"embeddingModel": stats.embedding_model,
"embeddingServiceReady": storage.is_embedding_ready(),
}))
}
pub async fn execute_health(storage: &Arc<Mutex<Storage>>) -> Result<Value, String> {
let storage = storage.lock().await;
let stats = storage.get_stats().map_err(|e| e.to_string())?;
// Determine health status
let status = if stats.total_nodes == 0 {
"empty"
} else if stats.average_retention < 0.3 {
"critical"
} else if stats.average_retention < 0.5 {
"degraded"
} else {
"healthy"
};
let mut warnings = Vec::new();
if stats.average_retention < 0.5 && stats.total_nodes > 0 {
warnings.push("Low average retention - consider running consolidation or reviewing memories".to_string());
}
if stats.nodes_due_for_review > 10 {
warnings.push(format!("{} memories are due for review", stats.nodes_due_for_review));
}
if stats.total_nodes > 0 && stats.nodes_with_embeddings == 0 {
warnings.push("No embeddings generated - semantic search unavailable. Run consolidation.".to_string());
}
let embedding_coverage = if stats.total_nodes > 0 {
(stats.nodes_with_embeddings as f64 / stats.total_nodes as f64) * 100.0
} else {
0.0
};
if embedding_coverage < 50.0 && stats.total_nodes > 10 {
warnings.push(format!("Only {:.1}% of memories have embeddings", embedding_coverage));
}
Ok(serde_json::json!({
"status": status,
"totalNodes": stats.total_nodes,
"nodesDueForReview": stats.nodes_due_for_review,
"averageRetention": stats.average_retention,
"embeddingCoverage": format!("{:.1}%", embedding_coverage),
"embeddingServiceReady": storage.is_embedding_ready(),
"warnings": warnings,
"recommendations": get_recommendations(&stats, status),
}))
}
fn get_recommendations(
stats: &MemoryStats,
status: &str,
) -> Vec<String> {
let mut recommendations = Vec::new();
if status == "critical" {
recommendations.push("CRITICAL: Many memories have very low retention. Review important memories with 'mark_reviewed'.".to_string());
}
if stats.nodes_due_for_review > 5 {
recommendations.push("Review due memories to strengthen retention.".to_string());
}
if stats.nodes_with_embeddings < stats.total_nodes {
recommendations.push("Run 'run_consolidation' to generate embeddings for better semantic search.".to_string());
}
if stats.total_nodes > 100 && stats.average_retention < 0.7 {
recommendations.push("Consider running periodic consolidation to maintain memory health.".to_string());
}
if recommendations.is_empty() {
recommendations.push("Memory system is healthy!".to_string());
}
recommendations
}
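The status thresholds inside `execute_health` above can be factored into a pure function so the mapping is easy to see: an empty store is "empty" regardless of retention, then average retention below 0.3 is "critical", below 0.5 "degraded", otherwise "healthy". The `u64` type for `total_nodes` is an assumption, since `MemoryStats` field types are not shown in this file:

```rust
// Pure restatement of the health-status thresholds used by execute_health.
fn health_status(total_nodes: u64, average_retention: f64) -> &'static str {
    if total_nodes == 0 {
        "empty"
    } else if average_retention < 0.3 {
        "critical"
    } else if average_retention < 0.5 {
        "degraded"
    } else {
        "healthy"
    }
}

fn main() {
    assert_eq!(health_status(0, 0.0), "empty");
    assert_eq!(health_status(10, 0.2), "critical");
    assert_eq!(health_status(10, 0.4), "degraded");
    assert_eq!(health_status(10, 0.9), "healthy");
}
```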


@@ -0,0 +1,250 @@
//! Synaptic Tagging Tool
//!
//! Retroactive importance assignment based on Synaptic Tagging & Capture theory.
//! Frey & Morris (1997), Redondo & Morris (2011).
use serde_json::Value;
use std::sync::Arc;
use tokio::sync::Mutex;
use vestige_core::{
CaptureWindow, ImportanceEvent, ImportanceEventType,
SynapticTaggingConfig, SynapticTaggingSystem, Storage,
};
/// Input schema for trigger_importance tool
pub fn trigger_schema() -> Value {
serde_json::json!({
"type": "object",
"properties": {
"event_type": {
"type": "string",
"enum": ["user_flag", "emotional", "novelty", "repeated_access", "cross_reference"],
"description": "Type of importance event"
},
"memory_id": {
"type": "string",
"description": "The memory that triggered the importance signal"
},
"description": {
"type": "string",
"description": "Description of why this is important (optional)"
},
"hours_back": {
"type": "number",
"description": "How many hours back to look for related memories (default: 9)"
},
"hours_forward": {
"type": "number",
"description": "How many hours forward to capture (default: 2)"
}
},
"required": ["event_type", "memory_id"]
})
}
/// Input schema for find_tagged tool
pub fn find_schema() -> Value {
serde_json::json!({
"type": "object",
"properties": {
"min_strength": {
"type": "number",
"description": "Minimum tag strength (0.0-1.0, default: 0.3)"
},
"limit": {
"type": "integer",
"description": "Maximum results (default: 20)"
}
},
"required": []
})
}
/// Input schema for tag_stats tool
pub fn stats_schema() -> Value {
serde_json::json!({
"type": "object",
"properties": {},
})
}
/// Trigger an importance event to retroactively strengthen recent memories
pub async fn execute_trigger(
storage: &Arc<Mutex<Storage>>,
args: Option<Value>,
) -> Result<Value, String> {
let args = args.ok_or("Missing arguments")?;
let event_type_str = args["event_type"]
.as_str()
.ok_or("event_type is required")?;
let memory_id = args["memory_id"]
.as_str()
.ok_or("memory_id is required")?;
let description = args["description"].as_str();
let hours_back = args["hours_back"].as_f64().unwrap_or(9.0);
let hours_forward = args["hours_forward"].as_f64().unwrap_or(2.0);
let storage = storage.lock().await;
// Verify the trigger memory exists
let trigger_memory = storage.get_node(memory_id)
.map_err(|e| format!("Error: {}", e))?
.ok_or("Memory not found")?;
    // Validate the event type string (the parsed variant itself is unused below)
let _event_type = match event_type_str {
"user_flag" => ImportanceEventType::UserFlag,
"emotional" => ImportanceEventType::EmotionalContent,
"novelty" => ImportanceEventType::NoveltySpike,
"repeated_access" => ImportanceEventType::RepeatedAccess,
"cross_reference" => ImportanceEventType::CrossReference,
_ => return Err(format!("Unknown event type: {}", event_type_str)),
};
    // All event types currently use the user_flag constructor (simpler API)
let event = ImportanceEvent::user_flag(memory_id, description);
// Configure capture window
let config = SynapticTaggingConfig {
capture_window: CaptureWindow::new(hours_back, hours_forward),
prp_threshold: 0.5,
tag_lifetime_hours: 12.0,
min_tag_strength: 0.1,
max_cluster_size: 100,
enable_clustering: true,
auto_decay: true,
cleanup_interval_hours: 1.0,
};
let mut stc = SynapticTaggingSystem::with_config(config);
// Get recent memories to tag
let recent = storage.get_all_nodes(100, 0)
.map_err(|e| e.to_string())?;
// Tag all recent memories
for mem in &recent {
stc.tag_memory(&mem.id);
}
// Trigger PRP (Plasticity-Related Proteins) synthesis
let result = stc.trigger_prp(event);
Ok(serde_json::json!({
"success": true,
"eventType": event_type_str,
"triggerMemory": {
"id": memory_id,
"content": trigger_memory.content
},
"captureWindow": {
"hoursBack": hours_back,
"hoursForward": hours_forward
},
"result": {
"memoriesCaptured": result.captured_count(),
"description": description
},
        "explanation": format!(
            "Importance signal triggered: {} memories in the {:.1}h-back / {:.1}h-forward capture window were retroactively strengthened.",
            result.captured_count(), hours_back, hours_forward
        )
}))
}
/// Find memories with active synaptic tags
pub async fn execute_find(
storage: &Arc<Mutex<Storage>>,
args: Option<Value>,
) -> Result<Value, String> {
let args = args.unwrap_or(serde_json::json!({}));
let min_strength = args["min_strength"].as_f64().unwrap_or(0.3);
let limit = args["limit"].as_i64().unwrap_or(20) as usize;
let storage = storage.lock().await;
    // Fetch memories; retention strength below serves as a proxy for "tagged"
let memories = storage.get_all_nodes(200, 0)
.map_err(|e| e.to_string())?;
// Filter by retention strength (tagged memories have higher retention)
let tagged: Vec<Value> = memories.into_iter()
.filter(|m| m.retention_strength >= min_strength)
.take(limit)
.map(|m| serde_json::json!({
"id": m.id,
"content": m.content,
"retentionStrength": m.retention_strength,
"storageStrength": m.storage_strength,
"lastAccessed": m.last_accessed.to_rfc3339(),
"tags": m.tags
}))
.collect();
Ok(serde_json::json!({
"success": true,
"minStrength": min_strength,
"taggedCount": tagged.len(),
"memories": tagged
}))
}
/// Get synaptic tagging statistics
pub async fn execute_stats(
storage: &Arc<Mutex<Storage>>,
) -> Result<Value, String> {
let storage = storage.lock().await;
let memories = storage.get_all_nodes(500, 0)
.map_err(|e| e.to_string())?;
let total = memories.len();
let high_retention = memories.iter().filter(|m| m.retention_strength >= 0.7).count();
let medium_retention = memories.iter().filter(|m| m.retention_strength >= 0.4 && m.retention_strength < 0.7).count();
let low_retention = memories.iter().filter(|m| m.retention_strength < 0.4).count();
let avg_retention = if total > 0 {
memories.iter().map(|m| m.retention_strength).sum::<f64>() / total as f64
} else {
0.0
};
let avg_storage = if total > 0 {
memories.iter().map(|m| m.storage_strength).sum::<f64>() / total as f64
} else {
0.0
};
Ok(serde_json::json!({
"totalMemories": total,
"averageRetention": avg_retention,
"averageStorage": avg_storage,
"distribution": {
"highRetention": {
"count": high_retention,
"threshold": 0.7,
"percentage": if total > 0 { (high_retention as f64 / total as f64) * 100.0 } else { 0.0 }
},
"mediumRetention": {
"count": medium_retention,
"threshold": "0.4-0.7",
"percentage": if total > 0 { (medium_retention as f64 / total as f64) * 100.0 } else { 0.0 }
},
"lowRetention": {
"count": low_retention,
"threshold": "<0.4",
"percentage": if total > 0 { (low_retention as f64 / total as f64) * 100.0 } else { 0.0 }
}
},
"science": {
"theory": "Synaptic Tagging and Capture (Frey & Morris 1997)",
"principle": "Weak memories can be retroactively strengthened when important events occur within a temporal window",
"captureWindow": "Up to 9 hours in biological systems"
}
}))
}