mirror of
https://github.com/samvallad33/vestige.git
synced 2026-04-25 16:56:21 +02:00
Four internal optimizations for dramatically better performance:

1. F16 vector quantization (ScalarKind::F16 in USearch) — 2x storage savings
2. Matryoshka 256-dim truncation (768→256) — 3x embedding storage savings
3. Convex Combination fusion (0.3 keyword / 0.7 semantic) replacing RRF
4. Cross-encoder reranker (Jina Reranker v1 Turbo via fastembed TextRerank)

Combined: 6x vector storage reduction, ~20% better retrieval quality. Cross-encoder loads in background — server starts instantly. Old 768-dim embeddings auto-migrated on load. 614 tests pass, zero warnings.
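The convex-combination fusion in item 3 can be sketched as follows. This is a minimal illustration, not the repository's actual code: the function names and the min-max score normalization are assumptions; only the 0.3 keyword / 0.7 semantic weights come from the commit message.

```rust
/// Min-max normalize a score list into [0, 1] (an assumed normalization
/// scheme; the real implementation may normalize differently).
fn min_max_normalize(scores: &[f32]) -> Vec<f32> {
    let min = scores.iter().cloned().fold(f32::INFINITY, f32::min);
    let max = scores.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let range = max - min;
    scores
        .iter()
        .map(|s| if range > 0.0 { (s - min) / range } else { 0.0 })
        .collect()
}

/// Fuse per-document keyword and semantic scores as a convex combination:
/// fused = 0.3 * keyword + 0.7 * semantic (weights from the commit message).
fn fuse(keyword: &[f32], semantic: &[f32]) -> Vec<f32> {
    let k = min_max_normalize(keyword);
    let s = min_max_normalize(semantic);
    k.iter().zip(s.iter()).map(|(k, s)| 0.3 * k + 0.7 * s).collect()
}

fn main() {
    // A doc ranked top by keyword only vs. a doc ranked top by semantics only:
    let fused = fuse(&[1.0, 0.0], &[0.0, 1.0]);
    println!("{:?}", fused); // prints [0.3, 0.7]
}
```

Unlike RRF, which combines ranks, a convex combination operates on the (normalized) scores themselves, so the 0.3/0.7 weights directly control how much each retriever contributes.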
24 lines · 592 B · JSON
{
  "name": "vestige",
  "version": "1.6.0",
  "private": true,
  "description": "Cognitive memory for AI - MCP server with FSRS-6 spaced repetition",
  "author": "Sam Valladares",
  "license": "MIT OR Apache-2.0",
  "repository": {
    "type": "git",
    "url": "https://github.com/samvallad33/vestige"
  },
  "scripts": {
    "build:mcp": "cargo build --release --package vestige-mcp",
    "test": "cargo test --workspace",
    "lint": "cargo clippy -- -D warnings",
    "fmt": "cargo fmt"
  },
  "devDependencies": {
    "typescript": "^5.9.3"
  },
  "engines": {
    "node": ">=18"
  }
}