Website | Docs | YouTube | Configuration Builder | Discord | Blog

Graph-powered context harness for AI agents

TrustGraph continuously builds a context graph, creating a living context harness so agents act with connected understanding rather than isolated chunks.

Ingest your data into a unified context graph, enrich that graph with ontologies, and serve that graph as structured context to your agents and applications. Instead of letting LLMs guess from flat text, TrustGraph harnesses your graph so every response, tool call, and decision is driven by connected context.

TrustGraph gives you a graph-backed context harness with:

  • a context graph over your data (nodes, edges, embeddings)
  • graphs built to your ontologies and protocols
  • GraphRAG APIs for context-aware retrieval
  • an agent harness that lets LLMs query, traverse, and update the graph with isolated collections and modular context cores

Use it as the context layer under any model or agent framework, with model inference for open models on Nvidia, AMD, or Intel hardware.

Quickstart

```bash
npx @trustgraph/config
```

Table of Contents

Key Features

  • Ontology-Driven Context Engineering
  • Unify Data Silos into a Single Context Graph
  • Automated Context Graph Construction and Retrieval
  • 3D GraphViz
  • Single Agent or Multi-Agent Systems
  • Interoperability with MCP
  • Run Anywhere from local to cloud
  • Observability and Telemetry
  • Serve Models for Private LLM Inference
  • Create Custom Workflows
  • Control Data Access for Users and Agents
  • Backend Orchestration for Context Graphs, Datastores, and File and Object Storage
  • High Throughput Data Streaming
  • Fully Containerized

What is a Context Graph?

Why TrustGraph?

Getting Started

Watch TrustGraph 101

Configuration Builder

The Configuration Builder assembles all of the selected components and builds them into a deployable package. It has 4 sections:

  • Version: Select the version of TrustGraph you'd like to deploy
  • Component Selection: Choose from the available deployment platforms, LLMs, graph store, VectorDB, chunking algorithm, chunking parameters, and LLM parameters
  • Customization: Enable OCR pipelines and custom embeddings models
  • Finish Deployment: Download the launch YAML files with deployment instructions

Workbench

The Workbench provides tools for all major features of TrustGraph and is served on port 8888 by default.

  • Vector Search: Search the installed knowledge bases
  • Agentic, GraphRAG, and LLM Chat: Chat interface for agents, GraphRAG queries, or direct LLM queries
  • Relationships: Analyze deep relationships in the installed knowledge bases
  • Graph Visualizer: 3D GraphViz of the installed knowledge bases
  • Library: Staging area for installing knowledge bases
  • Flow Classes: Workflow preset configurations
  • Flows: Create custom workflows and adjust LLM parameters during runtime
  • Knowledge Cores: Manage reusable knowledge bases
  • Prompts: Manage and adjust prompts during runtime
  • Schemas: Define custom schemas for structured data knowledge bases
  • Ontologies: Define custom ontologies for unstructured data knowledge bases
  • Agent Tools: Define tools with collections, knowledge cores, MCP connections, and tool groups
  • MCP Tools: Connect to MCP servers

TypeScript Library for UIs

There are 3 libraries for quick UI integration of TrustGraph services.

Context Cores

A common challenge in GraphRAG architectures is reusing and removing context across agent workflows. TrustGraph addresses this with modular, reusable Context Cores that can be loaded and removed at runtime. Some sample context cores are here.

A Context Core has two components:

  • Context graph triples
  • Vector embeddings mapped to the context graph
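The two components can be pictured as plain data. The sketch below is illustrative only: the field names and toy vectors are assumptions, not TrustGraph's actual knowledge-core serialization format.

```python
# Illustrative sketch only: these structures are assumptions for
# illustration, not TrustGraph's actual serialization format.

# Component 1: context graph triples (subject, predicate, object).
triples = [
    ("TrustGraph", "licensedUnder", "Apache-2.0"),
    ("TrustGraph", "defaultGraphStore", "Apache Cassandra"),
    ("TrustGraph", "defaultVectorDB", "Qdrant"),
]

# Component 2: vector embeddings mapped back to graph entities
# (toy 3-dimensional vectors; real embeddings have hundreds of dimensions).
embeddings = {
    "TrustGraph": [0.12, -0.45, 0.33],
    "Apache Cassandra": [0.08, 0.91, -0.27],
}

# The mapping is what makes a core modular: a vector hit resolves to
# its surrounding graph context, and unloading the core removes both
# components together.
entities = {s for s, _, _ in triples} | {o for _, _, o in triples}
assert set(embeddings) <= entities
```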

Integrations

TrustGraph provides component flexibility to optimize agent workflows.

LLM APIs
  • Anthropic
  • AWS Bedrock
  • AzureAI
  • AzureOpenAI
  • Cohere
  • Google AI Studio
  • Google VertexAI
  • Mistral
  • OpenAI
LLM Orchestration
  • LM Studio
  • Llamafiles
  • Ollama
  • TGI
  • vLLM
Graph Storage
  • Apache Cassandra (default)
  • Neo4j
  • Memgraph
  • FalkorDB
VectorDBs
  • Qdrant (default)
  • Pinecone
  • Milvus
File and Object Storage
  • Garage (default)
  • MinIO
Observability
  • Prometheus
  • Grafana
Data Streaming
  • Apache Pulsar
Clouds
  • AWS
  • Azure
  • Google Cloud
  • OVHcloud
  • Scaleway

Observability & Telemetry

Once the platform is running, access the Grafana dashboard at:

http://localhost:3000

Default credentials are:

user: admin
password: admin
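Grafana's HTTP API accepts these same credentials via HTTP basic auth. A minimal sketch of building the request header, assuming a default local deployment (`/api/health` is Grafana's standard health-check endpoint; change the credentials in production):

```python
import base64

# Default Grafana credentials from above; change these in production.
user, password = "admin", "admin"

# HTTP basic auth: "Basic " + base64("user:password").
token = base64.b64encode(f"{user}:{password}".encode()).decode()
headers = {"Authorization": f"Basic {token}"}

# Grafana's unauthenticated health-check endpoint.
url = "http://localhost:3000/api/health"

print(headers["Authorization"])  # Basic YWRtaW46YWRtaW4=
```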

The default Grafana dashboard tracks the following:

Telemetry
  • LLM Latency
  • Error Rate
  • Service Request Rates
  • Queue Backlogs
  • Chunking Histogram
  • Error Source by Service
  • Rate Limit Events
  • CPU Usage by Service
  • Memory Usage by Service
  • Models Deployed
  • Token Throughput (Tokens/second)
  • Cost Throughput (Cost/second)

Contributing

Developer's Guide

License

TrustGraph is licensed under Apache 2.0.

Copyright 2024-2025 TrustGraph

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Support & Community

  • Bug Reports & Feature Requests: Discord
  • Discussions & Questions: Discord
  • Documentation: Docs