Implements a full explainability pipeline for GraphRAG queries, enabling traceability from answers back to source documents.
Renamed throughout for clarity:
- provenance_callback → explain_callback
- provenance_id → explain_id
- provenance_collection → explain_collection
- message_type "provenance" → "explain"
- Queue name "provenance" → "explainability"
GraphRAG queries now emit explainability events as they execute:
1. Session - query text and timestamp
2. Retrieval - edges retrieved from subgraph
3. Selection - selected edges with LLM reasoning (JSONL with id + reasoning)
4. Answer - reference to synthesized response
Events stream via explain_callback during query(), enabling real-time UX.
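The four event types above could be consumed by a callback like the following sketch. The field names (`type`, `query`, `edges`, `selection`, `document_id`) are illustrative assumptions, not the actual TrustGraph wire format; only the four event kinds and the JSONL selection payload come from the text.

```python
import json

def format_explain_event(event: dict) -> str:
    """Format one explainability event emitted during query().
    Field names are assumptions for illustration."""
    kind = event.get("type")
    if kind == "session":
        return f"session: {event['query']} @ {event['timestamp']}"
    if kind == "retrieval":
        return f"retrieval: {len(event['edges'])} edges from subgraph"
    if kind == "selection":
        # Selection payload is JSONL: one {"id": ..., "reasoning": ...} record per line
        recs = [json.loads(line) for line in event["selection"].splitlines()]
        return "selection: " + "; ".join(f"{r['id']} ({r['reasoning']})" for r in recs)
    if kind == "answer":
        return f"answer: {event['document_id']}"
    return f"unknown event type: {kind!r}"

def explain_callback(event: dict) -> None:
    # Streamed during query() for real-time display
    print(format_explain_event(event))
```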
- Answers stored in librarian service (not inline in graph - too large)
- Document ID as URN: urn:trustgraph:answer:{session_id}
- Graph stores tg:document reference (IRI) to librarian document
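A minimal sketch of the answer-reference scheme just described: the answer body lives in the librarian service under a URN document ID, and the graph stores only a tg:document reference to it. The URN format is taken from the text; the triple shape and helper names are assumptions.

```python
TG_DOCUMENT = "tg:document"  # predicate referencing the librarian document

def answer_document_id(session_id: str) -> str:
    # Document ID format from the text: urn:trustgraph:answer:{session_id}
    return f"urn:trustgraph:answer:{session_id}"

def answer_reference_triple(answer_uri: str, session_id: str) -> tuple:
    # Graph-side reference: the answer node points at the librarian
    # document rather than storing the (large) answer inline.
    return (answer_uri, TG_DOCUMENT, answer_document_id(session_id))
```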
- Added librarian producer/consumer to graph-rag service
- get_labelgraph() now returns (labeled_edges, uri_map)
- uri_map maps edge_id(label_s, label_p, label_o) → (uri_s, uri_p, uri_o)
- Explainability data stores original URIs, not labels
- Enables tracing edges back to reifying statements via tg:reifies
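The uri_map shape can be sketched as follows: labeled edges are keyed by a deterministic edge_id over their labels, and the value preserves the original URIs so explainability data can trace back to the graph. The hashing scheme inside edge_id is an assumption for illustration, not the real implementation.

```python
import hashlib

def edge_id(label_s: str, label_p: str, label_o: str) -> str:
    # Deterministic key over the three labels (hashing scheme assumed)
    joined = "|".join([label_s, label_p, label_o])
    return hashlib.sha256(joined.encode()).hexdigest()[:16]

def build_uri_map(edges) -> dict:
    """edges: iterable of ((label_s, label_p, label_o), (uri_s, uri_p, uri_o)).
    Returns the uri_map described in the text: edge_id → original URIs."""
    return {edge_id(*labels): uris for labels, uris in edges}
```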
- Added serialize_triple() to query service (matches storage format)
- get_term_value() now handles TRIPLE type terms
- Enables querying by quoted triple in object position:
?stmt tg:reifies <<s p o>>
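The quoted-triple serialization can be illustrated with a small sketch: a TRIPLE-typed term in the object position is flattened recursively into the same string form used at storage time, so query matching reduces to string equality. The `<<s p o>>` syntax mirrors the text (RDF-star style); the exact storage format used by the Cassandra service is an assumption here.

```python
def serialize_term(term) -> str:
    # A TRIPLE-typed term is modeled here as a (s, p, o) tuple
    if isinstance(term, tuple):
        return serialize_triple(*term)
    return term  # plain IRI or literal, already a string

def serialize_triple(s, p, o) -> str:
    # Matches the quoted-triple form used in the query example:
    # ?stmt tg:reifies <<s p o>>
    return f"<<{serialize_term(s)} {serialize_term(p)} {serialize_term(o)}>>"
```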
- Displays real-time explainability events during query
- Resolves rdfs:label for edge components (s, p, o)
- Traces source chain via prov:wasDerivedFrom to root document
- Output: "Source: Chunk 1 → Page 2 → Document Title"
- Label caching to avoid repeated queries
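The label resolution and source tracing above can be sketched like this: rdfs:label lookups are memoized, and the prov:wasDerivedFrom chain is walked to the root document. The in-memory dictionaries stand in for real graph queries; only the output format comes from the text.

```python
from functools import lru_cache

# Stand-ins for real rdfs:label and prov:wasDerivedFrom queries
LABELS = {"ex:chunk1": "Chunk 1", "ex:page2": "Page 2", "ex:doc1": "Document Title"}
DERIVED_FROM = {"ex:chunk1": "ex:page2", "ex:page2": "ex:doc1"}

@lru_cache(maxsize=None)
def resolve_label(uri: str) -> str:
    # Cached so repeated edges don't re-query the same label
    return LABELS.get(uri, uri)

def source_chain(uri: str) -> str:
    # Walk prov:wasDerivedFrom links up to the root document
    parts = [resolve_label(uri)]
    while uri in DERIVED_FROM:
        uri = DERIVED_FROM[uri]
        parts.append(resolve_label(uri))
    return "Source: " + " → ".join(parts)
```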
GraphRagResponse:
- explain_id: str | None
- explain_collection: str | None
- message_type: str ("chunk" or "explain")
- end_of_session: bool
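The response fields listed above, sketched as a dataclass. The actual schema lives in trustgraph-base/trustgraph/schema/services/retrieval.py; this only mirrors the shape and defaults described.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class GraphRagResponse:
    explain_id: str | None = None
    explain_collection: str | None = None
    message_type: str = "chunk"   # "chunk" or "explain"
    end_of_session: bool = False
```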
trustgraph-base/trustgraph/provenance/:
- namespaces.py - Added TG_DOCUMENT predicate
- triples.py - answer_triples() supports document_id reference
- uris.py - Added edge_selection_uri()
trustgraph-base/trustgraph/schema/services/retrieval.py:
- GraphRagResponse with explain_id, explain_collection, end_of_session
trustgraph-flow/trustgraph/retrieval/graph_rag/:
- graph_rag.py - URI preservation, streaming answer accumulation
- rag.py - Librarian integration, real-time explain emission
trustgraph-flow/trustgraph/query/triples/cassandra/service.py:
- Quoted triple serialization for query matching
trustgraph-cli/trustgraph/cli/invoke_graph_rag.py:
- Full explainability display with label resolution and source tracing
Durable agent memory you can trust. Build, version, and retrieve grounded context from a context graph.
- Give agents memory that persists across sessions and deployments.
- Reduce hallucinations with grounded context retrieval.
- Ship reusable, portable Context Cores (packaged context you can move between projects/environments).
The context backend:
- Multi-model and multimodal database system
- Tabular/relational, key-value
- Document, graph, and vectors
- Images, video, and audio
- Automated data ingest and loading
- Quick ingest with semantic similarity retrieval
- Ontology structuring for precision retrieval
- Out-of-the-box RAG pipelines
- DocumentRAG
- GraphRAG
- OntologyRAG
- 3D GraphViz for exploring context
- Fully Agentic System
- Single Agent
- Multi Agent
- MCP integration
- Run anywhere
- Deploy locally with Docker
- Deploy in cloud with Kubernetes
- Support for all major LLMs
- API support for Anthropic, Cohere, Gemini, Mistral, OpenAI, and others
- Model inferencing with vLLM, Ollama, TGI, LM Studio, and Llamafiles
- Developer friendly
Quickstart
npx @trustgraph/config
TrustGraph ships as Docker containers and can be run locally with Docker, Podman, or Minikube. The config tool will generate:
- deploy.zip with either a docker-compose.yaml file for a Docker/Podman deploy or resources.yaml for Kubernetes
- Deployment instructions as INSTALLATION.md
Table of Contents
What is a Context Graph?
Why TrustGraph?
Getting Started with TrustGraph
Watch TrustGraph 101
Workbench
The Workbench provides tools for all major features of TrustGraph and is served on port 8888 by default.
- Vector Search: Search the installed knowledge bases
- Agentic, GraphRAG and LLM Chat: Chat interface for agents, GraphRAG queries, or direct to LLMs
- Relationships: Analyze deep relationships in the installed knowledge bases
- Graph Visualizer: 3D GraphViz of the installed knowledge bases
- Library: Staging area for installing knowledge bases
- Flow Classes: Workflow preset configurations
- Flows: Create custom workflows and adjust LLM parameters during runtime
- Knowledge Cores: Manage reusable knowledge bases
- Prompts: Manage and adjust prompts during runtime
- Schemas: Define custom schemas for structured data knowledge bases
- Ontologies: Define custom ontologies for unstructured data knowledge bases
- Agent Tools: Define tools with collections, knowledge cores, MCP connections, and tool groups
- MCP Tools: Connect to MCP servers
TypeScript Library for UIs
Three libraries are available for quick UI integration of TrustGraph services.
Context Cores
A Context Core is a portable, versioned bundle of context that you can ship between projects and environments, pin in production, and reuse across agents. It packages the “stuff agents need to know” (structured knowledge + embeddings + evidence + policies) into a single artifact, so you can treat context like code: build it, test it, version it, promote it, and roll it back. TrustGraph is built to support this kind of end-to-end context engineering and orchestration workflow.
What’s inside a Context Core
A Context Core typically includes:
- Ontology (your domain schema) and mappings
- Context Graph (entities, relationships, supporting evidence)
- Embeddings / vector indexes for fast semantic entry-point lookup
- Source manifests + provenance (where facts came from, when, and how they were derived)
- Retrieval policies (traversal rules, freshness, authority ranking)
Tech Stack
TrustGraph provides component flexibility to optimize agent workflows.
LLM APIs
- Anthropic
- AWS Bedrock
- AzureAI
- AzureOpenAI
- Cohere
- Google AI Studio
- Google VertexAI
- Mistral
- OpenAI
LLM Orchestration
- LM Studio
- Llamafiles
- Ollama
- TGI
- vLLM
Graph Storage
- Apache Cassandra (default)
- Neo4j
- Memgraph
- FalkorDB
VectorDBs
- Qdrant (default)
- Pinecone
- Milvus
File and Object Storage
- Garage (default)
- MinIO
Observability
- Prometheus
- Grafana
Data Streaming
- Apache Pulsar
Clouds
- AWS
- Azure
- Google Cloud
- OVHcloud
- Scaleway
Observability & Telemetry
Once the platform is running, access the Grafana dashboard at:
http://localhost:3000
Default credentials are:
user: admin
password: admin
The default Grafana dashboard tracks the following:
Telemetry
- LLM Latency
- Error Rate
- Service Request Rates
- Queue Backlogs
- Chunking Histogram
- Error Source by Service
- Rate Limit Events
- CPU usage by Service
- Memory usage by Service
- Models Deployed
- Token Throughput (Tokens/second)
- Cost Throughput (Cost/second)
Contributing
License
TrustGraph is licensed under Apache 2.0.
Copyright 2024-2025 TrustGraph
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.


