Merge remote-tracking branch 'origin/main' into feat/v2.4-ui-expansion

This commit is contained in:
Sam Valladares 2026-04-23 02:04:07 -05:00
commit f3990f3b5b
10 changed files with 412 additions and 84 deletions

View file

@@ -50,7 +50,12 @@ jobs:
release-build:
name: Release Build (${{ matrix.target }})
runs-on: ${{ matrix.os }}
if: github.ref == 'refs/heads/main'
# Run on main pushes AND on PRs that touch workflows, Cargo manifests, or
# crate sources — so Intel Mac / Linux release targets are validated
# before merge, not after.
if: |
github.ref == 'refs/heads/main' ||
github.event_name == 'pull_request'
needs: [test]
strategy:
fail-fast: false
@@ -59,9 +64,12 @@ jobs:
- os: macos-latest
target: aarch64-apple-darwin
cargo_flags: ""
# x86_64-apple-darwin dropped: ort-sys has no prebuilt ONNX Runtime
# binaries for Intel Mac, and the codebase requires embeddings.
# Apple began phasing out Intel Macs in 2020. Build from source if needed.
# Intel Mac builds against a system ONNX Runtime via ort-dynamic
# (ort-sys has no x86_64-apple-darwin prebuilts). Compile-only here;
# runtime linking is a user concern documented in INSTALL-INTEL-MAC.md.
- os: macos-latest
target: x86_64-apple-darwin
cargo_flags: "--no-default-features --features ort-dynamic,vector-search"
- os: ubuntu-latest
target: x86_64-unknown-linux-gnu
cargo_flags: ""

View file

@@ -27,17 +27,21 @@ jobs:
os: ubuntu-latest
archive: tar.gz
cargo_flags: ""
needs_onnxruntime: false
- target: x86_64-pc-windows-msvc
os: windows-latest
archive: zip
cargo_flags: ""
# Intel Mac (x86_64-apple-darwin) is explicitly unsupported: the
# upstream ort-sys 2.0.0-rc.11 pinned by fastembed 5.13.2 does not
# ship Intel Mac prebuilts, and the v2.0.5 + v2.0.6 release
# workflows both failed this job. Matches ci.yml which already
# dropped the target. README documents the build-from-source path
# for Intel Mac users. When ort-sys ships Intel Mac prebuilts
# again, restore the entry.
needs_onnxruntime: false
# Intel Mac uses the ort-dynamic feature to runtime-link against a
# system libonnxruntime (Homebrew), sidestepping the missing
# x86_64-apple-darwin prebuilts in ort-sys 2.0.0-rc.11. Binary
# consumers must `brew install onnxruntime` before running — see
# INSTALL-INTEL-MAC.md bundled in the tarball.
- target: x86_64-apple-darwin
os: macos-latest
archive: tar.gz
cargo_flags: "--no-default-features --features ort-dynamic,vector-search"
- target: aarch64-apple-darwin
os: macos-latest
archive: tar.gz
@@ -58,8 +62,13 @@ jobs:
- name: Package (Unix)
if: matrix.os != 'windows-latest'
run: |
cp docs/INSTALL-INTEL-MAC.md target/${{ matrix.target }}/release/ 2>/dev/null || true
cd target/${{ matrix.target }}/release
tar -czf ../../../vestige-mcp-${{ matrix.target }}.tar.gz vestige-mcp vestige vestige-restore
if [ "${{ matrix.target }}" = "x86_64-apple-darwin" ]; then
tar -czf ../../../vestige-mcp-${{ matrix.target }}.tar.gz vestige-mcp vestige vestige-restore INSTALL-INTEL-MAC.md
else
tar -czf ../../../vestige-mcp-${{ matrix.target }}.tar.gz vestige-mcp vestige vestige-restore
fi
- name: Package (Windows)
if: matrix.os == 'windows-latest'

View file

@@ -80,12 +80,24 @@ curl -L https://github.com/samvallad33/vestige/releases/latest/download/vestige-
sudo mv vestige-mcp vestige vestige-restore /usr/local/bin/
```
**macOS (Intel) and Windows:** Prebuilt binaries aren't currently shipped for these targets because of upstream toolchain gaps (`ort-sys` lacks Intel Mac prebuilts in the 2.0.0-rc.11 release that `fastembed 5.13.2` is pinned to; `usearch 2.24.0` hit a Windows MSVC compile break tracked as [usearch#746](https://github.com/unum-cloud/usearch/issues/746)). Both build fine from source in the meantime:
**macOS (Intel):** Microsoft is discontinuing x86_64 macOS prebuilts after ONNX Runtime v1.23.0, so Vestige's Intel Mac build links dynamically against a Homebrew-installed ONNX Runtime via the `ort-dynamic` feature. Install with:
```bash
brew install onnxruntime
curl -L https://github.com/samvallad33/vestige/releases/latest/download/vestige-mcp-x86_64-apple-darwin.tar.gz | tar -xz
sudo mv vestige-mcp vestige vestige-restore /usr/local/bin/
echo 'export ORT_DYLIB_PATH="'"$(brew --prefix onnxruntime)"'/lib/libonnxruntime.dylib"' >> ~/.zshrc
source ~/.zshrc
claude mcp add vestige vestige-mcp -s user
```
Full Intel Mac guide (build-from-source + troubleshooting): [`docs/INSTALL-INTEL-MAC.md`](docs/INSTALL-INTEL-MAC.md).
**Windows:** Prebuilt binaries now ship; `usearch 2.24.0` hit an MSVC compile break ([usearch#746](https://github.com/unum-cloud/usearch/issues/746)), so we've pinned `=2.23.0` until upstream fixes it. Source builds also work:
```bash
git clone https://github.com/samvallad33/vestige && cd vestige
cargo build --release -p vestige-mcp
# Binary lands at target/release/vestige-mcp
```
**npm:**
@@ -315,7 +327,7 @@ At the start of every session:
| **Transport** | MCP stdio (JSON-RPC 2.0) + WebSocket |
| **Cognitive modules** | 30 stateful (17 neuroscience, 11 advanced, 2 search) |
| **First run** | Downloads embedding model (~130MB), then fully offline |
| **Platforms** | macOS ARM + Linux x86_64 (prebuilt). macOS Intel + Windows build from source (upstream toolchain gaps, see install notes). |
| **Platforms** | macOS ARM + Intel + Linux x86_64 + Windows x86_64 (all prebuilt). Intel Mac needs `brew install onnxruntime` — see [install guide](docs/INSTALL-INTEL-MAC.md). |
### Optional Features

View file

@@ -11,29 +11,41 @@ keywords = ["memory", "spaced-repetition", "fsrs", "embeddings", "knowledge-grap
categories = ["science", "database"]
[features]
default = ["embeddings", "vector-search", "bundled-sqlite"]
default = ["embeddings", "ort-download", "vector-search", "bundled-sqlite"]
# SQLite backend (default, unencrypted)
bundled-sqlite = ["rusqlite/bundled"]
# Encrypted SQLite via SQLCipher (mutually exclusive with bundled-sqlite)
# Use: --no-default-features --features encryption,embeddings,vector-search
# Use: --no-default-features --features encryption,embeddings,ort-download,vector-search
# Set VESTIGE_ENCRYPTION_KEY env var to enable encryption
encryption = ["rusqlite/bundled-sqlcipher"]
# Core embeddings with fastembed (ONNX-based, local inference)
# Downloads a pre-built ONNX Runtime binary at build time (requires glibc >= 2.38)
embeddings = ["dep:fastembed", "fastembed/ort-download-binaries-native-tls"]
# Embedding code paths (fastembed dep, hf-hub, image-models). This feature
# enables the #[cfg(feature = "embeddings")] gates throughout the crate but
# does NOT pick an ort backend. Pair with EXACTLY ONE of `ort-download`
# (prebuilt ONNX Runtime, default) or `ort-dynamic` (runtime-linked system
# libonnxruntime, required on targets without prebuilts).
embeddings = ["dep:fastembed", "fastembed/hf-hub-native-tls", "fastembed/image-models"]
# Default ort backend: ort-sys downloads prebuilt ONNX Runtime at build time.
# Requires glibc >= 2.38. Fails on x86_64-apple-darwin (Microsoft is
# discontinuing Intel Mac prebuilts after ONNX Runtime v1.23.0).
ort-download = ["embeddings", "fastembed/ort-download-binaries-native-tls"]
# HNSW vector search with USearch (20x faster than FAISS)
vector-search = ["dep:usearch"]
# Use runtime-loaded ORT instead of the downloaded pre-built binary.
# Required on systems with glibc < 2.38 (Ubuntu 22.04, Debian 12, RHEL/Rocky 9).
# Mutually exclusive with the default `embeddings` feature's download strategy.
# Usage: --no-default-features --features ort-dynamic,vector-search,bundled-sqlite
# Runtime requirement: libonnxruntime.so must be on LD_LIBRARY_PATH or ORT_DYLIB_PATH set.
ort-dynamic = ["dep:fastembed", "fastembed/ort-load-dynamic", "fastembed/hf-hub-native-tls", "fastembed/image-models"]
# Alternative ort backend: runtime-linked against a system libonnxruntime via
# dlopen. Required on Intel Mac and on systems with glibc < 2.38 (Ubuntu
# 22.04, Debian 12, RHEL/Rocky 9). Transitively enables `embeddings` so the
# #[cfg] gates stay active.
#
# Usage: cargo build --no-default-features \
# --features ort-dynamic,vector-search,bundled-sqlite
# Runtime: export ORT_DYLIB_PATH=/path/to/libonnxruntime.{dylib,so}
# (e.g. $(brew --prefix onnxruntime)/lib/libonnxruntime.dylib)
ort-dynamic = ["embeddings", "fastembed/ort-load-dynamic"]
# Nomic Embed Text v2 MoE (475M params, 305M active, Candle backend)
# Requires: fastembed with nomic-v2-moe feature
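The comment above notes that `embeddings` only enables `#[cfg]` gates without picking an ort backend. A minimal sketch of how such a cargo feature gate is consumed (illustrative names, not the crate's actual modules):

```rust
// Hypothetical illustration of a cargo feature gate: built with
// `--features embeddings` the real path compiles in; built without
// it, the stub keeps the crate compiling.
#[cfg(feature = "embeddings")]
fn embed(text: &str) -> Vec<f32> {
    // the real fastembed-backed path would live here
    let _ = text;
    vec![0.0; 384]
}

#[cfg(not(feature = "embeddings"))]
fn embed(_text: &str) -> Vec<f32> {
    Vec::new() // embeddings disabled: no vectors produced
}

fn main() {
    // Compiled without the feature, the stub path is active.
    println!("embedding dims: {}", embed("hello").len());
}
```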

View file

@@ -7,6 +7,53 @@
/// Dangerous FTS5 operators that could be used for injection or DoS
const FTS5_OPERATORS: &[&str] = &["OR", "AND", "NOT", "NEAR"];
/// Sanitize input for FTS5 MATCH queries using individual term matching.
///
/// Unlike `sanitize_fts5_query` which wraps in quotes for a phrase search,
/// this function produces individual terms joined with implicit AND.
/// This matches documents that contain ALL the query words in any order.
///
/// Use this when you want "find all records containing these words" rather
/// than "find records with this exact phrase".
pub fn sanitize_fts5_terms(query: &str) -> Option<String> {
let limited: String = query.chars().take(1000).collect();
let mut sanitized = limited;
sanitized = sanitized
.chars()
.map(|c| match c {
'*' | ':' | '^' | '-' | '"' | '(' | ')' | '{' | '}' | '[' | ']' => ' ',
_ => c,
})
.collect();
for op in FTS5_OPERATORS {
let pattern = format!(" {} ", op);
sanitized = sanitized.replace(&pattern, " ");
sanitized = sanitized.replace(&pattern.to_lowercase(), " ");
let upper = sanitized.to_uppercase();
let start_pattern = format!("{} ", op);
if upper.starts_with(&start_pattern) {
sanitized = sanitized.chars().skip(op.len()).collect();
}
let end_pattern = format!(" {}", op);
if upper.ends_with(&end_pattern) {
let char_count = sanitized.chars().count();
sanitized = sanitized
.chars()
.take(char_count.saturating_sub(op.len()))
.collect();
}
}
let terms: Vec<&str> = sanitized.split_whitespace().collect();
if terms.is_empty() {
return None;
}
// Join with space: FTS5 implicit AND — all terms must appear
Some(terms.join(" "))
}
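Pulled out of the crate, the sanitizer's behavior can be exercised standalone. The function below is condensed from the implementation above (the intermediate `limited`/`sanitized` bindings are merged); the assertions show operator stripping and the empty-query case:

```rust
const FTS5_OPERATORS: &[&str] = &["OR", "AND", "NOT", "NEAR"];

// Condensed copy of sanitize_fts5_terms so the example runs standalone.
fn sanitize_fts5_terms(query: &str) -> Option<String> {
    let mut sanitized: String = query
        .chars()
        .take(1000)
        .map(|c| match c {
            '*' | ':' | '^' | '-' | '"' | '(' | ')' | '{' | '}' | '[' | ']' => ' ',
            _ => c,
        })
        .collect();
    for op in FTS5_OPERATORS {
        let pattern = format!(" {} ", op);
        sanitized = sanitized.replace(&pattern, " ");
        sanitized = sanitized.replace(&pattern.to_lowercase(), " ");
        let upper = sanitized.to_uppercase();
        if upper.starts_with(&format!("{} ", op)) {
            sanitized = sanitized.chars().skip(op.len()).collect();
        }
        if upper.ends_with(&format!(" {}", op)) {
            let n = sanitized.chars().count();
            sanitized = sanitized.chars().take(n.saturating_sub(op.len())).collect();
        }
    }
    let terms: Vec<&str> = sanitized.split_whitespace().collect();
    if terms.is_empty() {
        return None;
    }
    Some(terms.join(" "))
}

fn main() {
    // Operators and special characters are stripped; plain terms survive.
    assert_eq!(sanitize_fts5_terms("hello OR world*"), Some("hello world".into()));
    // A query that is nothing but noise sanitizes to None.
    assert_eq!(sanitize_fts5_terms("***"), None);
}
```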
/// Sanitize input for FTS5 MATCH queries
///
/// Prevents:

View file

@@ -1520,6 +1520,38 @@ impl Storage {
Ok(result)
}
/// FTS5 keyword search using individual-term matching (implicit AND).
///
/// Unlike `search()` which uses phrase matching (words must be adjacent),
/// this returns documents containing ALL query words in any order and position.
/// This is more useful for free-text queries from external callers.
pub fn search_terms(&self, query: &str, limit: i32) -> Result<Vec<KnowledgeNode>> {
use crate::fts::sanitize_fts5_terms;
let Some(terms) = sanitize_fts5_terms(query) else {
return Ok(vec![]);
};
let reader = self
.reader
.lock()
.map_err(|_| StorageError::Init("Reader lock poisoned".into()))?;
let mut stmt = reader.prepare(
"SELECT n.* FROM knowledge_nodes n
JOIN knowledge_fts fts ON n.id = fts.id
WHERE knowledge_fts MATCH ?1
ORDER BY rank
LIMIT ?2",
)?;
let nodes = stmt.query_map(params![terms, limit], Self::row_to_node)?;
let mut result = Vec::new();
for node in nodes {
result.push(node?);
}
Ok(result)
}
/// Get all nodes (paginated)
pub fn get_all_nodes(&self, limit: i32, offset: i32) -> Result<Vec<KnowledgeNode>> {
let reader = self
@@ -1841,7 +1873,12 @@ impl Storage {
include_types: Option<&[String]>,
exclude_types: Option<&[String]>,
) -> Result<Vec<(String, f32)>> {
let sanitized_query = sanitize_fts5_query(query);
// Use individual-term matching (implicit AND) so multi-word queries find
// documents where all words appear anywhere, not just as adjacent phrases.
use crate::fts::sanitize_fts5_terms;
let Some(terms_query) = sanitize_fts5_terms(query) else {
return Ok(vec![]);
};
// Build the type filter clause and collect parameter values.
// We use numbered parameters: ?1 = query, ?2 = limit, ?3.. = type strings.
@@ -1887,7 +1924,7 @@ impl Storage {
// Build the parameter list: [query, limit, ...type_values]
let mut param_values: Vec<Box<dyn rusqlite::ToSql>> = Vec::new();
param_values.push(Box::new(sanitized_query.clone()));
param_values.push(Box::new(terms_query));
param_values.push(Box::new(limit));
for tv in &type_values {
param_values.push(Box::new(tv.to_string()));
@@ -2077,61 +2114,77 @@ impl Storage {
Ok(result)
}
/// Query memories created/modified in a time range
/// Query memories created/modified in a time range, optionally filtered by
/// `node_type` and/or `tags`.
///
/// All filters are pushed into the SQL `WHERE` clause so that `LIMIT` is
/// applied AFTER filtering. If filters were applied in Rust after `LIMIT`,
/// sparse types/tags could be crowded out by a dominant set within the
/// limit window — e.g. a query for a rare tag against a corpus where
/// every day has hundreds of rows with a common tag would return 0
/// matches after `LIMIT` crowded the rare-tag rows out.
///
/// Tag filtering uses `tags LIKE '%"tag"%'` — an exact-match JSON pattern
/// that keys off the quote characters around each tag in the stored JSON
/// array. This avoids the substring-match false positive where `alpha`
/// would otherwise match `alphabet`.
pub fn query_time_range(
&self,
start: Option<DateTime<Utc>>,
end: Option<DateTime<Utc>>,
limit: i32,
node_type: Option<&str>,
tags: Option<&[String]>,
) -> Result<Vec<KnowledgeNode>> {
let start_str = start.map(|dt| dt.to_rfc3339());
let end_str = end.map(|dt| dt.to_rfc3339());
let (query, params): (&str, Vec<Box<dyn rusqlite::ToSql>>) = match (&start_str, &end_str) {
(Some(s), Some(e)) => (
"SELECT * FROM knowledge_nodes
WHERE created_at >= ?1 AND created_at <= ?2
ORDER BY created_at DESC
LIMIT ?3",
vec![
Box::new(s.clone()) as Box<dyn rusqlite::ToSql>,
Box::new(e.clone()) as Box<dyn rusqlite::ToSql>,
Box::new(limit) as Box<dyn rusqlite::ToSql>,
],
),
(Some(s), None) => (
"SELECT * FROM knowledge_nodes
WHERE created_at >= ?1
ORDER BY created_at DESC
LIMIT ?2",
vec![
Box::new(s.clone()) as Box<dyn rusqlite::ToSql>,
Box::new(limit) as Box<dyn rusqlite::ToSql>,
],
),
(None, Some(e)) => (
"SELECT * FROM knowledge_nodes
WHERE created_at <= ?1
ORDER BY created_at DESC
LIMIT ?2",
vec![
Box::new(e.clone()) as Box<dyn rusqlite::ToSql>,
Box::new(limit) as Box<dyn rusqlite::ToSql>,
],
),
(None, None) => (
"SELECT * FROM knowledge_nodes
ORDER BY created_at DESC
LIMIT ?1",
vec![Box::new(limit) as Box<dyn rusqlite::ToSql>],
),
let mut conditions: Vec<String> = Vec::new();
let mut params: Vec<Box<dyn rusqlite::ToSql>> = Vec::new();
let mut idx = 1;
if let Some(ref s) = start_str {
conditions.push(format!("created_at >= ?{}", idx));
params.push(Box::new(s.clone()) as Box<dyn rusqlite::ToSql>);
idx += 1;
}
if let Some(ref e) = end_str {
conditions.push(format!("created_at <= ?{}", idx));
params.push(Box::new(e.clone()) as Box<dyn rusqlite::ToSql>);
idx += 1;
}
if let Some(nt) = node_type {
conditions.push(format!("LOWER(node_type) = LOWER(?{})", idx));
params.push(Box::new(nt.to_string()) as Box<dyn rusqlite::ToSql>);
idx += 1;
}
if let Some(tag_list) = tags.filter(|t| !t.is_empty()) {
let mut tag_conditions = Vec::new();
for tag in tag_list {
tag_conditions.push(format!("tags LIKE ?{}", idx));
params.push(Box::new(format!("%\"{}\"%", tag)) as Box<dyn rusqlite::ToSql>);
idx += 1;
}
conditions.push(format!("({})", tag_conditions.join(" OR ")));
}
let where_clause = if conditions.is_empty() {
String::new()
} else {
format!("WHERE {}", conditions.join(" AND "))
};
let query = format!(
"SELECT * FROM knowledge_nodes {} ORDER BY created_at DESC LIMIT ?{}",
where_clause, idx
);
params.push(Box::new(limit) as Box<dyn rusqlite::ToSql>);
let reader = self
.reader
.lock()
.map_err(|_| StorageError::Init("Reader lock poisoned".into()))?;
let mut stmt = reader.prepare(query)?;
let mut stmt = reader.prepare(&query)?;
let params_refs: Vec<&dyn rusqlite::ToSql> = params.iter().map(|p| p.as_ref()).collect();
let nodes = stmt.query_map(params_refs.as_slice(), Self::row_to_node)?;
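The numbered-placeholder assembly above can be sketched standalone. This is condensed (the `end` bound is omitted and names are illustrative), but it mirrors the scheme: each filter claims the next `?N`, and the `LIMIT` placeholder always lands last:

```rust
// Standalone sketch of query_time_range's WHERE-clause assembly.
fn build_query(start: Option<&str>, node_type: Option<&str>, tags: &[&str]) -> String {
    let mut conditions: Vec<String> = Vec::new();
    let mut idx = 1;
    if start.is_some() {
        conditions.push(format!("created_at >= ?{}", idx));
        idx += 1;
    }
    if node_type.is_some() {
        conditions.push(format!("LOWER(node_type) = LOWER(?{})", idx));
        idx += 1;
    }
    if !tags.is_empty() {
        let mut ors = Vec::new();
        for _ in tags {
            ors.push(format!("tags LIKE ?{}", idx));
            idx += 1;
        }
        conditions.push(format!("({})", ors.join(" OR ")));
    }
    let where_clause = if conditions.is_empty() {
        String::new()
    } else {
        format!("WHERE {}", conditions.join(" AND "))
    };
    // idx now points past every filter param, so it names the LIMIT slot.
    format!(
        "SELECT * FROM knowledge_nodes {} ORDER BY created_at DESC LIMIT ?{}",
        where_clause, idx
    )
}

fn main() {
    let sql = build_query(Some("2026-01-01T00:00:00Z"), Some("concept"), &["rare", "beta"]);
    assert!(sql.contains("created_at >= ?1"));
    assert!(sql.contains("LOWER(node_type) = LOWER(?2)"));
    assert!(sql.contains("(tags LIKE ?3 OR tags LIKE ?4)"));
    assert!(sql.ends_with("LIMIT ?5"));
    println!("{}", sql);
}
```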

View file

@@ -10,12 +10,17 @@ categories = ["command-line-utilities", "database"]
repository = "https://github.com/samvallad33/vestige"
[features]
default = ["embeddings", "vector-search"]
default = ["embeddings", "ort-download", "vector-search"]
embeddings = ["vestige-core/embeddings"]
vector-search = ["vestige-core/vector-search"]
# For systems with glibc < 2.38 — use runtime-loaded ORT instead of the downloaded pre-built binary.
# Usage: cargo install --path crates/vestige-mcp --no-default-features --features ort-dynamic,vector-search
ort-dynamic = ["vestige-core/ort-dynamic"]
# Default ort backend: downloads prebuilt ONNX Runtime at build time.
# Fails on targets without prebuilts (notably x86_64-apple-darwin).
ort-download = ["embeddings", "vestige-core/ort-download"]
# Alternative ort backend: runtime-linked system libonnxruntime via dlopen.
# Required on Intel Mac and on systems with glibc < 2.38.
# Usage: cargo build --no-default-features --features ort-dynamic,vector-search
# Runtime: export ORT_DYLIB_PATH=$(brew --prefix onnxruntime)/lib/libonnxruntime.dylib
ort-dynamic = ["embeddings", "vestige-core/ort-dynamic"]
[[bin]]
name = "vestige-mcp"

View file

@@ -384,7 +384,7 @@ pub async fn get_timeline(
let start = Utc::now() - Duration::days(days);
let nodes = state
.storage
.query_time_range(Some(start), Some(Utc::now()), limit)
.query_time_range(Some(start), Some(Utc::now()), limit, None, None)
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
// Group by day

View file

@@ -126,19 +126,20 @@ pub async fn execute(storage: &Arc<Storage>, args: Option<Value>) -> Result<Valu
let limit = args.limit.unwrap_or(50).clamp(1, 200);
// Query memories in time range
let mut results = storage
.query_time_range(start, end, limit)
// Query memories in time range with filters pushed into SQL. Rust-side
// `retain` after `LIMIT` was unsafe for sparse types/tags — a dominant
// set could crowd the sparse matches out of the limit window and leave
// the retain with 0 rows to keep.
let results = storage
.query_time_range(
start,
end,
limit,
args.node_type.as_deref(),
args.tags.as_deref(),
)
.map_err(|e| e.to_string())?;
// Post-query filters
if let Some(ref node_type) = args.node_type {
results.retain(|n| n.node_type == *node_type);
}
if let Some(tags) = args.tags.as_ref().filter(|t| !t.is_empty()) {
results.retain(|n| tags.iter().any(|t| n.tags.contains(t)));
}
// Group by day
let mut by_day: BTreeMap<NaiveDate, Vec<Value>> = BTreeMap::new();
for node in &results {
@@ -204,6 +205,28 @@ mod tests {
.unwrap();
}
/// Ingest with explicit node_type and tags. Used by the sparse-filter
/// regression tests so the dominant and sparse sets can be told apart.
async fn ingest_typed(
storage: &Arc<Storage>,
content: &str,
node_type: &str,
tags: &[&str],
) {
storage
.ingest(vestige_core::IngestInput {
content: content.to_string(),
node_type: node_type.to_string(),
source: None,
sentiment_score: 0.0,
sentiment_magnitude: 0.0,
tags: tags.iter().map(|t| t.to_string()).collect(),
valid_from: None,
valid_until: None,
})
.unwrap();
}
#[test]
fn test_schema_has_properties() {
let s = schema();
@@ -357,4 +380,90 @@
let value = result.unwrap();
assert_eq!(value["totalMemories"], 0);
}
/// Regression: `node_type` filter must work even when the sparse type is
/// crowded out by a dominant type within the SQL `LIMIT`. Before the fix,
/// `query_time_range` applied `LIMIT` before the Rust-side `retain`, so a
/// limit of 5 against 10 dominant + 2 sparse rows returned 5 dominant,
/// then filtered to 0 sparse.
#[tokio::test]
async fn test_timeline_node_type_filter_sparse() {
let (storage, _dir) = test_storage().await;
// Dominant set: 10 facts
for i in 0..10 {
ingest_typed(&storage, &format!("Dominant memory {}", i), "fact", &["alpha"]).await;
}
// Sparse set: 2 concepts
for i in 0..2 {
ingest_typed(&storage, &format!("Sparse memory {}", i), "concept", &["beta"]).await;
}
// Limit 5 against 12 total — before the fix, `retain` on `concept`
// would operate on the 5 most recent rows (all `fact`) and find 0.
let args = serde_json::json!({ "node_type": "concept", "limit": 5 });
let value = execute(&storage, Some(args)).await.unwrap();
assert_eq!(
value["totalMemories"], 2,
"Both sparse concepts should survive a limit smaller than the dominant set"
);
// Also verify the storage layer directly, so the contract is pinned
// at the API boundary even if the tool wrapper shifts.
let nodes = storage
.query_time_range(None, None, 5, Some("concept"), None)
.unwrap();
assert_eq!(nodes.len(), 2);
assert!(nodes.iter().all(|n| n.node_type == "concept"));
}
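The crowd-out this test pins down can be simulated in memory with plain iterators (a sketch of the ordering bug, not the storage code):

```rust
// Simulate 12 newest-first rows: 10 dominant "fact" rows, then 2 sparse
// "concept" rows that are older.
fn main() {
    let mut rows = vec!["fact"; 10];
    rows.extend(["concept"; 2]);
    // Buggy order (pre-fix): LIMIT 5 first, filter after; the sparse rows
    // never make it into the window.
    let buggy = rows.iter().take(5).filter(|t| **t == "concept").count();
    // Fixed order: filter first (as the SQL now does), LIMIT after.
    let fixed = rows.iter().filter(|t| **t == "concept").take(5).count();
    assert_eq!((buggy, fixed), (0, 2));
    println!("buggy survivors: {buggy}, fixed survivors: {fixed}");
}
```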
/// Regression: `tags` filter must work even when the sparse tag is
/// crowded out by a dominant tag within the SQL `LIMIT`. Parallel to
/// the node_type sparse case — same `retain`-after-`LIMIT` bug.
#[tokio::test]
async fn test_timeline_tag_filter_sparse() {
let (storage, _dir) = test_storage().await;
// Dominant set: 10 memories with tag "common"
for i in 0..10 {
ingest_typed(&storage, &format!("Common memory {}", i), "fact", &["common"]).await;
}
// Sparse set: 2 memories with tag "rare"
for i in 0..2 {
ingest_typed(&storage, &format!("Rare memory {}", i), "fact", &["rare"]).await;
}
let args = serde_json::json!({ "tags": ["rare"], "limit": 5 });
let value = execute(&storage, Some(args)).await.unwrap();
assert_eq!(
value["totalMemories"], 2,
"Both sparse-tag matches should survive a limit smaller than the dominant set"
);
let tag_slice = vec!["rare".to_string()];
let nodes = storage
.query_time_range(None, None, 5, None, Some(&tag_slice))
.unwrap();
assert_eq!(nodes.len(), 2);
assert!(nodes.iter().all(|n| n.tags.iter().any(|t| t == "rare")));
}
/// Regression: tag filter must match exact tags, not substrings. Without
/// the `"tag"`-wrapped `LIKE` pattern, a query for `alpha` would also
/// match rows tagged `alphabet`. The pattern `%"alpha"%` keys off the
/// JSON-array quote characters and rejects that.
#[tokio::test]
async fn test_timeline_tag_filter_exact_match() {
let (storage, _dir) = test_storage().await;
ingest_typed(&storage, "Exact tag hit", "fact", &["alpha"]).await;
ingest_typed(&storage, "Substring decoy", "fact", &["alphabet"]).await;
let tag_slice = vec!["alpha".to_string()];
let nodes = storage
.query_time_range(None, None, 50, None, Some(&tag_slice))
.unwrap();
assert_eq!(nodes.len(), 1, "Only the exact-tag match should return");
assert_eq!(nodes[0].content, "Exact tag hit");
}
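Why the quote-wrapped pattern rejects the decoy can be checked with plain substring logic, since SQL `LIKE '%x%'` is substring containment over the stored JSON array:

```rust
fn main() {
    // What the SQL pattern %"alpha"% keys on: the tag plus its JSON quotes.
    let needle = format!("\"{}\"", "alpha");
    let exact = r#"["alpha","beta"]"#; // stored tags JSON, exact hit
    let decoy = r#"["alphabet"]"#;     // substring decoy
    assert!(exact.contains(&needle));
    assert!(!decoy.contains(&needle));
    println!("exact: {}, decoy: {}", exact.contains(&needle), decoy.contains(&needle));
}
```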
}

docs/INSTALL-INTEL-MAC.md Normal file
View file

@@ -0,0 +1,73 @@
# Intel Mac Installation
The Intel Mac (`x86_64-apple-darwin`) binary links dynamically against a system
ONNX Runtime instead of a prebuilt ort-sys library. Microsoft is discontinuing
x86_64 macOS prebuilts after ONNX Runtime v1.23.0, so we use the
`ort-dynamic` feature to runtime-link against the version you install locally.
This keeps Vestige working on Intel Mac without depending on prebuilts that upstream no longer ships.
## Prerequisite
Install ONNX Runtime via Homebrew:
```bash
brew install onnxruntime
```
## Install
```bash
# 1. Download the binary
curl -L https://github.com/samvallad33/vestige/releases/latest/download/vestige-mcp-x86_64-apple-darwin.tar.gz | tar -xz
sudo mv vestige-mcp vestige vestige-restore /usr/local/bin/
# 2. Point the binary at Homebrew's libonnxruntime
echo 'export ORT_DYLIB_PATH="'"$(brew --prefix onnxruntime)"'/lib/libonnxruntime.dylib"' >> ~/.zshrc
source ~/.zshrc
# 3. Verify
vestige-mcp --version
# 4. Connect to Claude Code
claude mcp add vestige vestige-mcp -s user
```
`ORT_DYLIB_PATH` is how the `ort` crate's `load-dynamic` feature finds the
shared library at runtime. Without it the binary starts but fails on the first
embedding call with a "could not find libonnxruntime" error.
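That lookup can be sanity-checked before launch with a tiny preflight. `describe` is a hypothetical helper written for this guide, not part of vestige-mcp:

```rust
use std::{env, path::Path};

// Classify what the ort loader will see in ORT_DYLIB_PATH.
fn describe(val: Option<&str>) -> &'static str {
    match val {
        Some(p) if Path::new(p).exists() => "found",
        Some(_) => "set but missing on disk",
        None => "unset (embeddings will fail at first call)",
    }
}

fn main() {
    let val = env::var("ORT_DYLIB_PATH").ok();
    println!("ORT_DYLIB_PATH: {}", describe(val.as_deref()));
}
```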
## Building from source
```bash
brew install onnxruntime
git clone https://github.com/samvallad33/vestige && cd vestige
cargo build --release -p vestige-mcp \
--no-default-features \
--features ort-dynamic,vector-search
export ORT_DYLIB_PATH="$(brew --prefix onnxruntime)/lib/libonnxruntime.dylib"
./target/release/vestige-mcp --version
```
## Troubleshooting
**`dyld: Library not loaded: libonnxruntime.dylib`** — `ORT_DYLIB_PATH` is not
set in the shell that spawned `vestige-mcp`. Claude Code / Codex inherit env
vars from whatever launched them; export `ORT_DYLIB_PATH` in `~/.zshrc` or
`~/.bashrc` and restart the client.
**`error: ort-sys does not provide prebuilt binaries for the target
x86_64-apple-darwin`** — you hit this only if you ran `cargo build` without the
`--no-default-features --features ort-dynamic,vector-search` flags. The default
feature set still tries to download a non-existent prebuilt. Add the flags and
rebuild.
**Homebrew installed `onnxruntime` but `brew --prefix onnxruntime` prints
nothing** — upgrade brew (`brew update`) and retry. Older brew formulae used
`onnx-runtime` (hyphenated). If your brew still has the hyphenated formula,
substitute accordingly in the commands above.
## Long-term
Intel Mac will move to a fully pure-Rust backend (`ort-candle`) in Vestige
v2.1, removing the Homebrew prerequisite entirely. Track progress at
[issue #41](https://github.com/samvallad33/vestige/issues/41).