* Added tech spec
* Add provenance recording to React agent loop
Enables agent sessions to be traced and debugged using the same
explainability infrastructure as GraphRAG. Agent traces record:
- Session start with query and timestamp
- Each iteration's thought, action, arguments, and observation
- Final answer with derivation chain
Changes:
- Add session_id and collection fields to AgentRequest schema
- Add agent predicates (TG_THOUGHT, TG_ACTION, etc.) to namespaces
- Create agent provenance triple generators in provenance/agent.py
- Register explainability producer in agent service
- Emit provenance triples during agent execution
- Update CLI tools to detect and render agent traces alongside GraphRAG
* Updated explainability taxonomy:
GraphRAG: tg:Question → tg:Exploration → tg:Focus → tg:Synthesis
Agent: tg:Question → tg:Analysis (one per iteration) → tg:Conclusion
All entities also have their PROV-O type (prov:Activity or prov:Entity).
Entity types follow human reasoning patterns:
- tg:Question - the user's query (shared with GraphRAG)
- tg:Analysis - each think/act/observe cycle
- tg:Conclusion - the final answer
GraphRAG entities also gain explicit TG types:
- tg:Question, tg:Exploration, tg:Focus, tg:Synthesis
All types retain their PROV-O base types (prov:Activity, prov:Entity).
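As a rough illustration of the iteration recording described above, the sketch below builds (subject, predicate, object) triples for one think/act/observe cycle. The function name, URI layout, and predicate strings are assumptions for illustration; the actual signatures in trustgraph-base/trustgraph/provenance/agent.py may differ.

```python
# Illustrative only: build provenance triples for one agent iteration.
# Predicate names mirror those listed above (TG_THOUGHT, TG_ACTION, ...);
# the real builders in trustgraph/provenance/agent.py may look different.

RDF_TYPE = "rdf:type"
TG_THOUGHT = "tg:thought"
TG_ACTION = "tg:action"
TG_OBSERVATION = "tg:observation"

def agent_analysis_triples(session_id, iteration, thought, action, observation):
    """Triples for one think/act/observe cycle of an agent session."""
    subject = f"urn:trustgraph:agent:{session_id}/analysis/{iteration}"
    return [
        (subject, RDF_TYPE, "prov:Activity"),   # PROV-O base type
        (subject, RDF_TYPE, "tg:Analysis"),     # TrustGraph reasoning type
        (subject, TG_THOUGHT, thought),
        (subject, TG_ACTION, action),
        (subject, TG_OBSERVATION, observation),
    ]

triples = agent_analysis_triples(
    "1234", 0,
    thought="Need to look up the answer",
    action="knowledge-query",
    observation="Found 3 matching entities",
)
for s, p, o in triples:
    print(s, p, o)
```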
* Completed Document RAG explainability. Summary of changes:
Schema Changes:
- trustgraph-base/trustgraph/schema/services/retrieval.py: Added
explain_id and explain_graph fields to DocumentRagResponse
- trustgraph-base/trustgraph/messaging/translators/retrieval.py:
Updated translator to handle explainability fields
Provenance Changes:
- trustgraph-base/trustgraph/provenance/namespaces.py: Added
TG_CHUNK_COUNT and TG_SELECTED_CHUNK predicates
- trustgraph-base/trustgraph/provenance/uris.py: Added
docrag_question_uri, docrag_exploration_uri, docrag_synthesis_uri
generators
- trustgraph-base/trustgraph/provenance/triples.py: Added
docrag_question_triples, docrag_exploration_triples,
docrag_synthesis_triples builders
- trustgraph-base/trustgraph/provenance/__init__.py: Exported all
new Document RAG functions and predicates
Service Changes:
- trustgraph-flow/trustgraph/retrieval/document_rag/document_rag.py:
Added explainability callback support and triple emission at each
phase (Question → Exploration → Synthesis)
- trustgraph-flow/trustgraph/retrieval/document_rag/rag.py:
Registered explainability producer and wired up the callback
Documentation:
- docs/tech-specs/agent-explainability.md: Added Document RAG entity
types and provenance model documentation
Document RAG Provenance Model:
Question (urn:trustgraph:docrag:{uuid})
│
│ tg:query, prov:startedAtTime
│ rdf:type = prov:Activity, tg:Question
│
↓ prov:wasGeneratedBy
│
Exploration (urn:trustgraph:docrag:{uuid}/exploration)
│
│ tg:chunkCount, tg:selectedChunk (multiple)
│ rdf:type = prov:Entity, tg:Exploration
│
↓ prov:wasDerivedFrom
│
Synthesis (urn:trustgraph:docrag:{uuid}/synthesis)
│
│ tg:content = "The answer..."
│ rdf:type = prov:Entity, tg:Synthesis
* Added a per-system Question subtype that makes the retrieval mechanism immediately obvious:

| System | TG Types on Question | URI Pattern |
|---|---|---|
| GraphRAG | tg:Question, tg:GraphRagQuestion | urn:trustgraph:question:{uuid} |
| Document RAG | tg:Question, tg:DocRagQuestion | urn:trustgraph:docrag:{uuid} |
| Agent | tg:Question, tg:AgentQuestion | urn:trustgraph:agent:{uuid} |
Files modified:
- trustgraph-base/trustgraph/provenance/namespaces.py - Added
TG_GRAPH_RAG_QUESTION, TG_DOC_RAG_QUESTION, TG_AGENT_QUESTION
- trustgraph-base/trustgraph/provenance/triples.py - Added subtype to
question_triples and docrag_question_triples
- trustgraph-base/trustgraph/provenance/agent.py - Added subtype to
agent_session_triples
- trustgraph-base/trustgraph/provenance/__init__.py - Exported new types
- docs/tech-specs/agent-explainability.md - Documented the subtypes
This allows:
- Query all questions: `?q rdf:type tg:Question`
- Query only GraphRAG: `?q rdf:type tg:GraphRagQuestion`
- Query only Document RAG: `?q rdf:type tg:DocRagQuestion`
- Query only Agent: `?q rdf:type tg:AgentQuestion`
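To show how the subtypes separate the three systems, the snippet below filters a toy triple store (a plain Python list standing in for the graph store) by rdf:type, the equivalent of the SPARQL patterns above. The URIs follow the patterns in the table; the specific IDs are made up.

```python
# Toy triple store: one question per system, typed with both the shared
# tg:Question type and its system-specific subtype.
triples = [
    ("urn:trustgraph:question:a1", "rdf:type", "tg:Question"),
    ("urn:trustgraph:question:a1", "rdf:type", "tg:GraphRagQuestion"),
    ("urn:trustgraph:docrag:b2", "rdf:type", "tg:Question"),
    ("urn:trustgraph:docrag:b2", "rdf:type", "tg:DocRagQuestion"),
    ("urn:trustgraph:agent:c3", "rdf:type", "tg:Question"),
    ("urn:trustgraph:agent:c3", "rdf:type", "tg:AgentQuestion"),
]

def questions_of_type(tg_type):
    """Equivalent of the SPARQL pattern: ?q rdf:type <tg_type>."""
    return {s for s, p, o in triples if p == "rdf:type" and o == tg_type}

print(questions_of_type("tg:Question"))        # questions from all three systems
print(questions_of_type("tg:DocRagQuestion"))  # only the Document RAG question
```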
* Fixed tests
Durable agent memory you can trust. Build, version, and retrieve grounded context from a context graph.
- Give agents memory that persists across sessions and deployments.
- Reduce hallucinations with grounded context retrieval.
- Ship reusable, portable Context Cores (packaged context you can move between projects and environments).
The context backend:
- Multi-model and multimodal database system
  - Tabular/relational, key-value
  - Document, graph, and vectors
  - Images, video, and audio
- Automated data ingest and loading
  - Quick ingest with semantic similarity retrieval
  - Ontology structuring for precision retrieval
- Out-of-the-box RAG pipelines
  - DocumentRAG
  - GraphRAG
  - OntologyRAG
- 3D GraphViz for exploring context
- Fully Agentic System
  - Single Agent
  - Multi Agent
  - MCP integration
- Run anywhere
  - Deploy locally with Docker
  - Deploy in cloud with Kubernetes
- Support for all major LLMs
  - API support for Anthropic, Cohere, Gemini, Mistral, OpenAI, and others
  - Model inferencing with vLLM, Ollama, TGI, LM Studio, and Llamafiles
- Developer friendly
Quickstart

```sh
npx @trustgraph/config
```

TrustGraph downloads as Docker containers and can be run locally with Docker, Podman, or Minikube. The config tool will generate:
- `deploy.zip` with either a `docker-compose.yaml` file for a Docker/Podman deploy or `resources.yaml` for Kubernetes
- Deployment instructions as `INSTALLATION.md`
Table of Contents
- What is a Context Graph?
- Why TrustGraph?
- Getting Started with TrustGraph
- Watch TrustGraph 101
Workbench
The Workbench provides tools for all major features of TrustGraph. The Workbench is on port 8888 by default.
- Vector Search: Search the installed knowledge bases
- Agentic, GraphRAG and LLM Chat: Chat interface for agents, GraphRAG queries, or direct to LLMs
- Relationships: Analyze deep relationships in the installed knowledge bases
- Graph Visualizer: 3D GraphViz of the installed knowledge bases
- Library: Staging area for installing knowledge bases
- Flow Classes: Workflow preset configurations
- Flows: Create custom workflows and adjust LLM parameters during runtime
- Knowledge Cores: Manage reusable knowledge bases
- Prompts: Manage and adjust prompts during runtime
- Schemas: Define custom schemas for structured data knowledge bases
- Ontologies: Define custom ontologies for unstructured data knowledge bases
- Agent Tools: Define tools with collections, knowledge cores, MCP connections, and tool groups
- MCP Tools: Connect to MCP servers
TypeScript Library for UIs
Three libraries are available for quick UI integration of TrustGraph services.
Context Cores
A Context Core is a portable, versioned bundle of context that you can ship between projects and environments, pin in production, and reuse across agents. It packages the “stuff agents need to know” (structured knowledge + embeddings + evidence + policies) into a single artifact, so you can treat context like code: build it, test it, version it, promote it, and roll it back. TrustGraph is built to support this kind of end-to-end context engineering and orchestration workflow.
What’s inside a Context Core
A Context Core typically includes:
- Ontology (your domain schema) and mappings
- Context Graph (entities, relationships, supporting evidence)
- Embeddings / vector indexes for fast semantic entry-point lookup
- Source manifests + provenance (where facts came from, when, and how they were derived)
- Retrieval policies (traversal rules, freshness, authority ranking)
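As a purely illustrative sketch of "context like code", the dataclass below bundles the components listed above into a single versioned artifact. Every field name here is an assumption for illustration, not the actual TrustGraph Context Core format.

```python
# Hypothetical Context Core bundle; field names are illustrative only
# and do not reflect the real TrustGraph artifact layout.
from dataclasses import dataclass, field

@dataclass
class ContextCore:
    name: str
    version: str                                       # pin/promote/roll back by version
    ontology: dict = field(default_factory=dict)       # domain schema and mappings
    graph: list = field(default_factory=list)          # (s, p, o) triples with evidence
    embeddings: dict = field(default_factory=dict)     # entity -> vector index entries
    sources: list = field(default_factory=list)        # source manifests + provenance
    retrieval_policies: dict = field(default_factory=dict)  # traversal/freshness rules

core = ContextCore(name="support-kb", version="1.2.0")
core.graph.append(("urn:ex:ticket-42", "ex:mentions", "urn:ex:product-x"))
print(core.name, core.version, len(core.graph))
```

Treating the bundle as one artifact is what lets it be tested, promoted, and rolled back like code.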
Tech Stack
TrustGraph provides component flexibility to optimize agent workflows.
LLM APIs
- Anthropic
- AWS Bedrock
- AzureAI
- AzureOpenAI
- Cohere
- Google AI Studio
- Google VertexAI
- Mistral
- OpenAI
LLM Orchestration
- LM Studio
- Llamafiles
- Ollama
- TGI
- vLLM
Graph Storage
- Apache Cassandra (default)
- Neo4j
- Memgraph
- FalkorDB
VectorDBs
- Qdrant (default)
- Pinecone
- Milvus
File and Object Storage
- Garage (default)
- MinIO
Observability
- Prometheus
- Grafana
Data Streaming
- Apache Pulsar
Clouds
- AWS
- Azure
- Google Cloud
- OVHcloud
- Scaleway
Observability & Telemetry
Once the platform is running, access the Grafana dashboard at:
http://localhost:3000
Default credentials are:
user: admin
password: admin
The default Grafana dashboard tracks the following:
Telemetry
- LLM Latency
- Error Rate
- Service Request Rates
- Queue Backlogs
- Chunking Histogram
- Error Source by Service
- Rate Limit Events
- CPU Usage by Service
- Memory Usage by Service
- Models Deployed
- Token Throughput (Tokens/second)
- Cost Throughput (Cost/second)
Contributing
License
TrustGraph is licensed under Apache 2.0.
Copyright 2024-2025 TrustGraph
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.


