mirror of
https://github.com/samvallad33/vestige.git
synced 2026-05-08 15:22:37 +02:00
Four internal optimizations for dramatically better performance:

1. F16 vector quantization (ScalarKind::F16 in USearch): 2x storage savings
2. Matryoshka 256-dim truncation (768 → 256 dims): 3x embedding storage savings
3. Convex Combination fusion (0.3 keyword / 0.7 semantic) replacing RRF
4. Cross-encoder reranker (Jina Reranker v1 Turbo via fastembed TextRerank)

Combined: 6x vector storage reduction, ~20% better retrieval quality. The cross-encoder loads in the background, so the server starts instantly. Old 768-dim embeddings are auto-migrated on load. 614 tests pass, zero warnings.
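The convex-combination fusion in item 3 can be sketched as below. This is an illustrative Python sketch, not Vestige's actual implementation; the min-max normalization step and the per-document score dictionaries are assumptions made here, since raw keyword (e.g. BM25) and semantic (cosine) scores live on different scales and must be normalized before a weighted sum is meaningful.

```python
def minmax(scores):
    """Scale a list of raw scores to [0, 1]; a constant list maps to all zeros."""
    lo, hi = min(scores), max(scores)
    span = hi - lo
    return [(s - lo) / span if span else 0.0 for s in scores]

def convex_fuse(keyword_scores, semantic_scores, alpha=0.7):
    """Fuse per-document scores as (1 - alpha) * keyword + alpha * semantic.

    alpha=0.7 mirrors the 0.3 keyword / 0.7 semantic weighting in the
    commit message. Inputs map document id -> raw score; a document
    missing from one retriever contributes a raw score of 0 there.
    """
    docs = sorted(set(keyword_scores) | set(semantic_scores))
    kw = dict(zip(docs, minmax([keyword_scores.get(d, 0.0) for d in docs])))
    sem = dict(zip(docs, minmax([semantic_scores.get(d, 0.0) for d in docs])))
    fused = {d: (1 - alpha) * kw[d] + alpha * sem[d] for d in docs}
    # Highest fused score first.
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical scores: doc "b" is decent in both retrievers and wins overall.
ranked = convex_fuse({"a": 12.0, "b": 3.0}, {"b": 0.9, "c": 0.8})
```

Unlike RRF, which fuses only ranks, a convex combination keeps score magnitudes, so a document that is moderately good in both retrievers can outrank one that is excellent in only one.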
37 lines
673 B
JSON
{
  "name": "vestige-mcp-server",
  "version": "1.6.0",
  "description": "Vestige MCP Server - AI Memory System for Claude and other assistants",
  "bin": {
    "vestige-mcp": "bin/vestige-mcp.js",
    "vestige": "bin/vestige.js"
  },
  "scripts": {
    "postinstall": "node scripts/postinstall.js"
  },
  "keywords": [
    "mcp",
    "claude",
    "ai",
    "memory",
    "vestige"
  ],
  "author": "Sam Valladares",
  "license": "MIT",
  "repository": {
    "type": "git",
    "url": "git+https://github.com/samvallad33/vestige.git"
  },
  "engines": {
    "node": ">=18"
  },
  "os": [
    "darwin",
    "linux",
    "win32"
  ],
  "cpu": [
    "x64",
    "arm64"
  ]
}