
[**Website**](https://trustgraph.ai) | [**Docs**](https://docs.trustgraph.ai) | [**YouTube**](https://www.youtube.com/@TrustGraphAI?sub_confirmation=1) | [**Configuration Terminal**](https://config-ui.demo.trustgraph.ai/) | [**Discord**](https://discord.gg/sQMwkRz5GX) | [**Blog**](https://blog.trustgraph.ai/subscribe)

# The context backend for reliable AI
LLMs alone hallucinate and diverge from ground truth. [TrustGraph](https://trustgraph.ai) is a context system that stores, enriches, and delivers context to LLMs to enable reliable AI agents. Think of it as [Supabase](https://github.com/supabase/supabase), but AI-native and powered by context graphs.
The context backend:
- [x] Multi-model and multimodal database system
  - [x] Tabular/relational, key-value
  - [x] Document, graph, and vectors
  - [x] Images, video, and audio
- [x] Automated data ingest and loading
  - [x] Quick ingest with semantic similarity retrieval
  - [x] Ontology structuring for precision retrieval
- [x] Out-of-the-box RAG pipelines
  - [x] DocumentRAG
  - [x] GraphRAG
  - [x] OntologyRAG
- [x] 3D GraphViz for exploring context
- [x] Fully agentic system
  - [x] Single agent
  - [x] Multi-agent
  - [x] MCP integration
- [x] Run anywhere
  - [x] Deploy locally with Docker
  - [x] Deploy in the cloud with Kubernetes
- [x] Support for all major LLMs
  - [x] API support for Anthropic, Cohere, Gemini, Mistral, OpenAI, and others
  - [x] Model inferencing with vLLM, Ollama, TGI, LM Studio, and Llamafiles
- [x] Developer friendly
  - [x] REST API [Docs](https://docs.trustgraph.ai/reference/apis/rest.html)
  - [x] Websocket API [Docs](https://docs.trustgraph.ai/reference/apis/websocket.html)
  - [x] Python API [Docs](https://docs.trustgraph.ai/reference/apis/python)
  - [x] CLI [Docs](https://docs.trustgraph.ai/reference/cli/)
## No API Keys Required
How many times have you cloned a repo, opened `.env.example`, and found dozens of API keys for third-party dependencies needed just to make the services work? Only three things in TrustGraph might need an API key:
- Third-party LLM services such as Anthropic, Cohere, Gemini, Mistral, and OpenAI
- Third-party OCR such as Mistral OCR
- The API key *you set* for the TrustGraph API gateway
Everything else is included.
- [x] Managed multi-model storage in [Cassandra](https://cassandra.apache.org/_/index.html)
- [x] Managed vector embedding storage in [Qdrant](https://github.com/qdrant/qdrant)
- [x] Managed file and object storage in [Garage](https://github.com/deuxfleurs-org/garage) (S3 compatible)
- [x] Managed high-speed pub/sub messaging fabric with [Pulsar](https://github.com/apache/pulsar)
- [x] Complete LLM inferencing stack for open LLMs with [vLLM](https://github.com/vllm-project/vllm), [TGI](https://github.com/huggingface/text-generation-inference), [Ollama](https://github.com/ollama/ollama), [LM Studio](https://github.com/lmstudio-ai), and [Llamafiles](https://github.com/mozilla-ai/llamafile)
## Quickstart
```shell
npx @trustgraph/config
```
TrustGraph is distributed as Docker containers and runs locally with Docker, Podman, or Minikube. The config tool generates:
- `deploy.zip` with either a `docker-compose.yaml` file for a Docker/Podman deploy or `resources.yaml` for Kubernetes
- Deployment instructions as `INSTALLATION.md`
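From there, a typical Docker Compose deploy looks like the sketch below. The generated `INSTALLATION.md` is the authoritative guide; the commands here are standard Docker Compose and kubectl usage, not TrustGraph-specific:

```shell
# Unpack the generated bundle
unzip deploy.zip

# Docker/Podman: launch the stack from the generated compose file
docker compose -f docker-compose.yaml up -d

# Kubernetes: apply the generated resources instead
# kubectl apply -f resources.yaml

# Tear down when finished
docker compose -f docker-compose.yaml down
```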