The context development platform. Store, enrich, and retrieve structured knowledge with graph-native infrastructure, semantic retrieval, and portable context cores. https://trustgraph.ai
cybermaggedon 24f0190ce7
RabbitMQ pub/sub backend with topic exchange architecture (#752)
Adds a RabbitMQ backend as an alternative to Pulsar, selectable via
PUBSUB_BACKEND=rabbitmq. Both backends implement the same PubSubBackend
protocol — no application code changes needed to switch.

RabbitMQ topology:
- Single topic exchange per topicspace (e.g. 'tg')
- Routing key derived from queue class and topic name
- Shared consumers: named queue bound to exchange (competing, round-robin)
- Exclusive consumers: anonymous auto-delete queue (broadcast, each gets
  every message). Used by Subscriber and config push consumer.
- Thread-local producer connections (pika is not thread-safe)
- Push-based consumption via basic_consume with process_data_events
  for heartbeat processing
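Two of the points above, the routing-key derivation and the thread-local producer pattern, can be sketched in a few lines. The key format and class names here are illustrative assumptions; the commit only says the key is derived from the queue class and topic name:

```python
import threading

def routing_key(queue_class: str, topic: str) -> str:
    # Hypothetical format: the commit states only that the key is
    # derived from the queue class and the topic name.
    return f"{queue_class}.{topic}"

class ProducerConnections:
    """One producer connection per thread, since pika connections
    are not thread-safe."""

    def __init__(self, connect):
        # connect: connection factory, e.g.
        # lambda: pika.BlockingConnection(params)
        self._connect = connect
        self._local = threading.local()

    def get(self):
        # Lazily open a connection the first time each thread asks.
        if not hasattr(self._local, "conn"):
            self._local.conn = self._connect()
        return self._local.conn
```

Each thread sees its own connection; repeated calls from the same thread reuse it.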

Consumer model changes:
- Consumer class creates one backend consumer per concurrent task
  (required for pika thread safety, harmless for Pulsar)
- Consumer class accepts consumer_type parameter
- Subscriber passes consumer_type='exclusive' for broadcast semantics
- Config push consumer uses consumer_type='exclusive' so every
  processor instance receives config updates
- handle_one_from_queue receives consumer as parameter for correct
  per-connection ack/nack
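The shared-vs-exclusive distinction driving the consumer_type parameter can be illustrated with an in-memory stand-in for the exchange (the names below are hypothetical, not TrustGraph's API): shared consumers bound to one named queue compete round-robin, while each exclusive consumer receives every message:

```python
import itertools
from collections import defaultdict

class FakeExchange:
    """In-memory stand-in for a topic exchange, showing why
    consumer_type matters. Not the real backend API."""

    def __init__(self):
        self._shared = defaultdict(list)   # named queue -> competing consumers
        self._cycles = {}                  # named queue -> round-robin iterator
        self._exclusive = []               # broadcast consumers

    def bind(self, consumer, consumer_type, queue=None):
        if consumer_type == "shared":
            self._shared[queue].append(consumer)
            self._cycles[queue] = itertools.cycle(self._shared[queue])
        elif consumer_type == "exclusive":
            # Like an anonymous auto-delete queue: private to this consumer.
            self._exclusive.append(consumer)
        else:
            raise ValueError(consumer_type)

    def publish(self, msg):
        for queue, cycle in self._cycles.items():
            next(cycle).append(msg)        # one competing consumer gets it
        for consumer in self._exclusive:
            consumer.append(msg)           # every exclusive consumer gets it
```

Two shared consumers on one queue split the stream between them; an exclusive consumer (like the Subscriber or config push consumer) sees all of it.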

LibrarianClient:
- New shared client class replacing duplicated librarian request-response
  code across 6+ services (chunking, decoders, RAG, etc.)
- Uses stream-document instead of get-document-content for fetching
  document content in 1MB chunks (avoids broker message size limits)
- Standalone object (self.librarian = LibrarianClient(...)) not a mixin
- get-document-content marked deprecated in schema and OpenAPI spec
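The chunked fetch can be sketched as a generator. The 1 MB chunk size comes from the commit; the offset-based paging protocol is an assumption, with `request` standing in for the librarian request-response call:

```python
def stream_document(request, chunk_size=1024 * 1024):
    """Yield a document's content in chunks (default 1 MB).

    `request(offset, size)` is a placeholder for the librarian
    stream-document request-response call; the offset-based paging
    shown here is an illustrative assumption.
    """
    offset = 0
    while True:
        chunk = request(offset, chunk_size)
        if not chunk:                 # empty response: end of document
            return
        yield chunk
        offset += len(chunk)

def fetch_document(request):
    """Reassemble the full document from its streamed chunks."""
    return b"".join(stream_document(request))
```

Because no single response exceeds the chunk size, the transfer stays under broker message size limits regardless of document size.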

Serialisation:
- Extracted dataclass_to_dict/dict_to_dataclass to shared
  serialization.py (used by both Pulsar and RabbitMQ backends)
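A minimal sketch of the two helpers, assuming flat field types plus nested dataclasses; the real serialization.py presumably handles more cases (lists of dataclasses, optionals, etc.):

```python
import dataclasses
from dataclasses import dataclass, fields

def dataclass_to_dict(obj):
    """Recursively convert a dataclass instance to plain dicts/lists."""
    return dataclasses.asdict(obj)

def dict_to_dataclass(cls, data):
    """Rebuild a dataclass from a dict, recursing into dataclass
    fields. Sketch only: no Union/Optional or list handling."""
    kwargs = {}
    for f in fields(cls):
        value = data[f.name]
        if dataclasses.is_dataclass(f.type):
            value = dict_to_dataclass(f.type, value)
        kwargs[f.name] = value
    return cls(**kwargs)
```

Keeping these in one shared module means both backends serialise messages identically, so a message produced under one backend deserialises the same way under the other.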

Librarian queues:
- Changed from flow class (persistent) back to request/response class
  now that stream-document eliminates large single messages
- API upload chunk size reduced from 5MB to 3MB to stay under broker
  limits after base64 encoding
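The arithmetic behind the chunk-size change: base64 expands payloads by 4/3 (in whole 4-byte groups), so a 3 MB chunk encodes to exactly 4 MB, while a 5 MB chunk would encode to roughly 6.7 MB and blow past a 5 MB broker message limit (Pulsar's default maximum message size is 5 MB):

```python
import base64

def encoded_size(n: int) -> int:
    """Size of n bytes after base64 encoding: 4 output bytes per
    3 input bytes, rounded up to a whole 4-byte group."""
    return 4 * ((n + 2) // 3)

three_mb = 3 * 1024 * 1024
five_mb = 5 * 1024 * 1024

# A 3 MB chunk encodes to exactly 4 MB -- under a 5 MB broker limit.
assert encoded_size(three_mb) == 4 * 1024 * 1024
assert len(base64.b64encode(b"\0" * three_mb)) == encoded_size(three_mb)

# A 5 MB chunk would encode to ~6.67 MB and exceed that limit.
assert encoded_size(five_mb) > five_mb
```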

Factory and CLI:
- get_pubsub() handles 'rabbitmq' backend with RabbitMQ connection params
- add_pubsub_args() includes RabbitMQ options (host, port, credentials)
- add_pubsub_args(standalone=True) defaults to localhost for CLI tools
- init_trustgraph skips Pulsar admin setup for non-Pulsar backends
- tg-dump-queues and tg-monitor-prompts use backend abstraction
- BaseClient and ConfigClient accept generic pubsub config
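The factory selection can be sketched as follows, with stub classes standing in for the real backends; class names and parameters are illustrative, but the PUBSUB_BACKEND variable and the 'pulsar'/'rabbitmq' names come from the commit:

```python
import os

class PulsarBackend:
    """Stub standing in for the real Pulsar backend."""
    def __init__(self, **params):
        self.params = params

class RabbitMQBackend:
    """Stub standing in for the real RabbitMQ backend."""
    def __init__(self, **params):
        self.params = params

def get_pubsub(backend=None, **params):
    """Select a pub/sub backend by name, defaulting from the
    PUBSUB_BACKEND environment variable."""
    backend = backend or os.environ.get("PUBSUB_BACKEND", "pulsar")
    backends = {"pulsar": PulsarBackend, "rabbitmq": RabbitMQBackend}
    try:
        return backends[backend](**params)
    except KeyError:
        raise ValueError(f"unknown pub/sub backend: {backend}") from None
```

Because both backends implement the same PubSubBackend protocol, callers never branch on the backend name after construction.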
2026-04-02 12:47:16 +01:00


Website | Docs | YouTube | Configuration Terminal | Discord | Blog


The context development platform

Building applications that need to know things requires more than a database. TrustGraph is the context development platform: graph-native infrastructure for storing, enriching, and retrieving structured knowledge at any scale. Think Supabase, but built around context graphs: multi-model storage, semantic retrieval pipelines, portable context cores, and a full developer toolkit out of the box. Deploy locally or in the cloud. No unnecessary API keys. Just context, engineered.

The platform:

  • Multi-model and multimodal database system
    • Tabular/relational, key-value
    • Document, graph, and vectors
    • Images, video, and audio
  • Automated data ingest and loading
    • Quick ingest with semantic similarity retrieval
    • Ontology structuring for precision retrieval
  • Out-of-the-box RAG pipelines
    • DocumentRAG
    • GraphRAG
    • OntologyRAG
  • 3D GraphViz for exploring context
  • Fully Agentic System
    • Single Agent
    • Multi Agent
    • MCP integration
  • Run anywhere
    • Deploy locally with Docker
    • Deploy in cloud with Kubernetes
  • Support for all major LLMs
    • API support for Anthropic, Cohere, Gemini, Mistral, OpenAI, and others
    • Model inferencing with vLLM, Ollama, TGI, LM Studio, and Llamafiles
  • Developer friendly

No API Keys Required

How many times have you cloned a repo and opened .env.example to find dozens of API keys for 3rd party dependencies needed just to make the services work? There are only 3 things in TrustGraph that might need an API key:

  • 3rd party LLM services like Anthropic, Cohere, Gemini, Mistral, OpenAI, etc.
  • 3rd party OCR like Mistral OCR
  • The API key you set for the TrustGraph API gateway

Everything else is included.

Quickstart

npx @trustgraph/config

TrustGraph ships as Docker containers and can be run locally with Docker, Podman, or Minikube. The config tool will generate:

  • deploy.zip with either a docker-compose.yaml file for a Docker/Podman deploy or resources.yaml for Kubernetes
  • Deployment instructions as INSTALLATION.md

For a browser based quickstart, try the Configuration Terminal.


What is a Context Graph?

Watch the video: What is a Context Graph?

Context Graphs in Action with TrustGraph

Watch the video: Context Graphs in Action

Getting Started with TrustGraph

Workbench

The Workbench provides tools for all major features of TrustGraph and is served on port 8888 by default.

  • Vector Search: Search the installed knowledge bases
  • Agentic, GraphRAG and LLM Chat: Chat interface for agents, GraphRAG queries, or direct to LLMs
  • Relationships: Analyze deep relationships in the installed knowledge bases
  • Graph Visualizer: 3D GraphViz of the installed knowledge bases
  • Library: Staging area for installing knowledge bases
  • Flow Classes: Workflow preset configurations
  • Flows: Create custom workflows and adjust LLM parameters during runtime
  • Knowledge Cores: Manage reusable knowledge bases
  • Prompts: Manage and adjust prompts during runtime
  • Schemas: Define custom schemas for structured data knowledge bases
  • Ontologies: Define custom ontologies for unstructured data knowledge bases
  • Agent Tools: Define tools with collections, knowledge cores, MCP connections, and tool groups
  • MCP Tools: Connect to MCP servers

TypeScript Library for UIs

There are 3 libraries for quick UI integration of TrustGraph services.

Context Cores

A Context Core is a portable, versioned bundle of context that you can ship between projects and environments, pin in production, and reuse across agents. It packages the “stuff agents need to know” (structured knowledge + embeddings + evidence + policies) into a single artifact, so you can treat context like code: build it, test it, version it, promote it, and roll it back. TrustGraph is built to support this kind of end-to-end context engineering and orchestration workflow.

What's inside a Context Core

A Context Core typically includes:

  • Ontology (your domain schema) and mappings
  • Context Graph (entities, relationships, supporting evidence)
  • Embeddings / vector indexes for fast semantic entry-point lookup
  • Source manifests + provenance (where facts came from, when, and how they were derived)
  • Retrieval policies (traversal rules, freshness, authority ranking)

Tech Stack

TrustGraph lets you swap components at each layer of the stack to optimize agent workflows for your environment.

LLM APIs
  • Anthropic
  • AWS Bedrock
  • AzureAI
  • AzureOpenAI
  • Cohere
  • Google AI Studio
  • Google VertexAI
  • Mistral
  • OpenAI
LLM Orchestration
  • LM Studio
  • Llamafiles
  • Ollama
  • TGI
  • vLLM
Multi-model storage
  • Apache Cassandra
VectorDB
  • Qdrant
File and Object Storage
  • Garage
Observability
  • Prometheus
  • Grafana
  • Loki
Data Streaming
  • Apache Pulsar
Clouds
  • AWS
  • Azure
  • Google Cloud
  • OVHcloud
  • Scaleway

Observability & Telemetry

Once the platform is running, access the Grafana dashboard at:

http://localhost:3000

Default credentials are:

user: admin
password: admin

The default Grafana dashboard tracks the following:

Telemetry
  • LLM Latency
  • Error Rate
  • Service Request Rates
  • Queue Backlogs
  • Chunking Histogram
  • Error Source by Service
  • Rate Limit Events
  • CPU usage by Service
  • Memory usage by Service
  • Models Deployed
  • Token Throughput (Tokens/second)
  • Cost Throughput (Cost/second)

Contributing

Developer's Guide

License

TrustGraph is licensed under Apache 2.0.

Copyright 2024-2025 TrustGraph

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Support & Community

  • Bug Reports & Feature Requests: Discord
  • Discussions & Questions: Discord
  • Documentation: Docs