# Knowledge Graph Architecture Foundations

## Foundation 1: Subject-Predicate-Object (SPO) Graph Model

**Decision**: Adopt SPO/RDF as the core knowledge representation model

**Rationale**:

- Provides maximum flexibility and interoperability with existing graph technologies
- Translates cleanly to other graph query languages (e.g., SPO → Cypher), although the reverse translation is not generally possible
- Creates a foundation that unlocks many downstream capabilities
- Supports both node-to-node relationships and, as in RDF, node-to-literal relationships

**Implementation**:

- Core data structure: `node → edge → {node | literal}`
- Maintain compatibility with RDF standards while supporting extended SPO operations

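The core structure can be sketched as a minimal in-memory triple store. This is an illustrative sketch, not TrustGraph's actual classes; names like `Node`, `Literal`, and `match` are hypothetical:

```python
from dataclasses import dataclass
from typing import Union

# A node is identified by a URI-like string; a literal is a plain value.
@dataclass(frozen=True)
class Node:
    uri: str

@dataclass(frozen=True)
class Literal:
    value: str

# Subject-Predicate-Object: subject and predicate are nodes (edges are
# themselves named), the object is either a node or a literal.
@dataclass(frozen=True)
class Triple:
    s: Node
    p: Node
    o: Union[Node, Literal]

triples = {
    Triple(Node("ex:alice"), Node("ex:knows"), Node("ex:bob")),   # node → node
    Triple(Node("ex:alice"), Node("ex:name"), Literal("Alice")),  # node → literal
}

# Basic graph pattern matching: None acts as a wildcard.
def match(s=None, p=None, o=None):
    return [t for t in triples
            if (s is None or t.s == s)
            and (p is None or t.p == p)
            and (o is None or t.o == o)]
```

The `{node | literal}` object position is what lets the same structure carry both graph topology and attribute data.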
## Foundation 2: LLM-Native Knowledge Graph Integration

**Decision**: Optimize knowledge graph structure and operations for LLM interaction

**Rationale**:

- Primary use case involves LLMs interfacing with knowledge graphs
- Graph technology choices must prioritize LLM compatibility over other considerations
- Enables natural language processing workflows that leverage structured knowledge

**Implementation**:

- Design graph schemas that LLMs can effectively reason about
- Optimize for common LLM interaction patterns

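One common LLM interaction pattern is verbalising triples into plain statements that can be placed in a prompt as context. The sketch below is hypothetical (the namespace prefix and helper names are assumptions, not TrustGraph's prompt format):

```python
# Render SPO triples as readable statements so an LLM can reason over
# graph context inside a prompt.
def local_name(uri: str) -> str:
    # Strip a namespace prefix like "ex:" or a URI path for readability.
    return uri.split(":")[-1].split("/")[-1].replace("_", " ")

def verbalise(triples):
    return "\n".join(
        f"{local_name(s)} {local_name(p)} {local_name(o)}."
        for s, p, o in triples
    )

context = verbalise([
    ("ex:alice", "ex:works_for", "ex:acme_corp"),
    ("ex:acme_corp", "ex:located_in", "ex:london"),
])
# `context` can now be interpolated into a prompt alongside the question.
```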
## Foundation 3: Embedding-Based Graph Navigation

**Decision**: Implement direct mapping from natural language queries to graph nodes via embeddings

**Rationale**:

- Enables the simplest possible path from NLP query to graph navigation
- Avoids complex intermediate query generation steps
- Provides efficient semantic search capabilities within the graph structure

**Implementation**:

- `NLP Query → Graph Embeddings → Graph Nodes`
- Maintain embedding representations for all graph entities
- Support direct semantic similarity matching for query resolution

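The `NLP Query → Graph Embeddings → Graph Nodes` pipeline reduces to a nearest-neighbour lookup. A minimal sketch, assuming the query has already been embedded by some model and node vectors live in a plain dict (a real deployment would use an embedding service and a vector store):

```python
import math

# Hypothetical index: node id → embedding vector.
node_embeddings = {
    "ex:cat":  [0.9, 0.1, 0.0],
    "ex:dog":  [0.8, 0.2, 0.0],
    "ex:bond": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest_nodes(query_vec, k=2):
    # Rank all nodes by similarity to the embedded query; the top hits
    # become the entry points for graph traversal.
    ranked = sorted(node_embeddings,
                    key=lambda n: cosine(query_vec, node_embeddings[n]),
                    reverse=True)
    return ranked[:k]

# A query like "pets", embedded, would land near the animal nodes.
entry_points = nearest_nodes([0.85, 0.15, 0.0])
```

No intermediate query language is generated: the query vector selects the starting nodes directly.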
## Foundation 4: Distributed Entity Resolution with Deterministic Identifiers

**Decision**: Support parallel knowledge extraction with deterministic entity identification (80% rule)

**Rationale**:

- **Ideal**: Single-process extraction with complete state visibility enables perfect entity resolution
- **Reality**: Scalability requirements demand parallel processing capabilities
- **Compromise**: Design for deterministic entity identification across distributed processes

**Implementation**:

- Develop mechanisms for generating consistent, unique identifiers across different knowledge extractors
- Same entity mentioned in different processes must resolve to the same identifier
- Acknowledge that ~20% of edge cases may require alternative processing models
- Design fallback mechanisms for complex entity resolution scenarios

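Deterministic identification can be sketched by hashing a normalised surface form into a name-based UUID, so independent extractors converge on the same identifier without coordination. The namespace URL below is a placeholder assumption, not TrustGraph's actual scheme:

```python
import uuid

# Any fixed namespace UUID shared by all extractors works.
ENTITY_NS = uuid.uuid5(uuid.NAMESPACE_URL, "https://example.org/entity")

def entity_id(label: str) -> str:
    # Normalise case and whitespace so parallel extractors derive the
    # same identifier for the same entity (the ~80% of cases).
    norm = " ".join(label.strip().lower().split())
    return str(uuid.uuid5(ENTITY_NS, norm))

# Two extractors, running in parallel, agree without any shared state:
a = entity_id("Marie Curie")
b = entity_id("  marie   curie ")
```

Genuinely different surface forms of one entity (e.g. "M. Curie") are the ~20% of cases that fall through to a fallback resolution step.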
## Foundation 5: Event-Driven Architecture with Publish-Subscribe

**Decision**: Implement pub-sub messaging system for system coordination

**Rationale**:

- Enables loose coupling between knowledge extraction, storage, and query components
- Supports real-time updates and notifications across the system
- Facilitates scalable, distributed processing workflows

**Implementation**:

- Message-driven coordination between system components
- Event streams for knowledge updates, extraction completion, and query results

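The coordination model can be illustrated with a minimal in-process pub-sub sketch; a real deployment uses a message broker, with one topic per event stream, and the topic name here is illustrative:

```python
from collections import defaultdict

class PubSub:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)

bus = PubSub()
stored = []

# Extraction and storage stay loosely coupled: the extractor only knows
# the topic name, never which components consume the triples.
bus.subscribe("triples.extracted", stored.append)
bus.publish("triples.extracted", ("ex:alice", "ex:knows", "ex:bob"))
```

Adding a second subscriber (say, an indexer) requires no change to the publisher, which is the loose-coupling property the rationale describes.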
## Foundation 6: Reentrant Agent Communication

**Decision**: Support reentrant pub-sub operations for agent-based processing

**Rationale**:

- Enables sophisticated agent workflows where agents can trigger and respond to each other
- Supports complex, multi-step knowledge processing pipelines
- Allows for recursive and iterative processing patterns

**Implementation**:

- Pub-sub system must handle reentrant calls safely
- Agent coordination mechanisms that prevent infinite loops
- Support for agent workflow orchestration

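Reentrancy plus loop prevention can be sketched with a bus that lets handlers publish from inside a handler while tracking a hop count. This is an assumed mechanism for illustration, not TrustGraph's actual agent coordination:

```python
from collections import defaultdict

class ReentrantBus:
    def __init__(self, max_hops=8):
        self.subscribers = defaultdict(list)
        self.max_hops = max_hops

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message, hops=0):
        if hops >= self.max_hops:
            return  # drop the message rather than recurse forever
        for handler in self.subscribers[topic]:
            # Each handler gets an `emit` that carries the hop count
            # forward, making reentrant publishes safe and bounded.
            handler(message, lambda t, m: self.publish(t, m, hops + 1))

bus = ReentrantBus(max_hops=3)
log = []

# Agent A asks agent B, which emits an answer back: a reentrant round trip.
bus.subscribe("ask", lambda msg, emit: emit("answer", msg.upper()))
bus.subscribe("answer", lambda msg, emit: log.append(msg))
bus.publish("ask", "what is trustgraph?")
```

A hop limit is the bluntest loop guard; visited-sets or per-workflow budgets are alternatives for the orchestration layer.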
## Foundation 7: Columnar Data Store Integration

**Decision**: Ensure query compatibility with columnar storage systems

**Rationale**:

- Enables efficient analytical queries over large knowledge datasets
- Supports business intelligence and reporting use cases
- Bridges graph-based knowledge representation with traditional analytical workflows

**Implementation**:

- Query translation layer: Graph queries → Columnar queries
- Hybrid storage strategy supporting both graph operations and analytical workloads
- Maintain query performance across both paradigms

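The translation layer can be sketched by storing triples in a single table and compiling a basic graph pattern (with `None` as a wildcard) into SQL. SQLite stands in for a columnar store here, and the table layout is illustrative, not TrustGraph's actual storage schema:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE triples (s TEXT, p TEXT, o TEXT)")
db.executemany("INSERT INTO triples VALUES (?, ?, ?)", [
    ("ex:alice", "ex:works_for", "ex:acme"),
    ("ex:bob",   "ex:works_for", "ex:acme"),
    ("ex:alice", "ex:knows",     "ex:bob"),
])

def query_pattern(s=None, p=None, o=None):
    # Graph pattern → columnar (SQL) query: each bound position becomes
    # a WHERE clause; unbound positions are left unconstrained.
    clauses, params = [], []
    for col, val in (("s", s), ("p", p), ("o", o)):
        if val is not None:
            clauses.append(f"{col} = ?")
            params.append(val)
    where = (" WHERE " + " AND ".join(clauses)) if clauses else ""
    return db.execute("SELECT s, p, o FROM triples" + where, params).fetchall()

# Analytical-style question over graph data: who works for acme?
employees = query_pattern(p="ex:works_for", o="ex:acme")
```

Multi-hop patterns translate to self-joins over the same table, which is where the performance trade-off between the two paradigms shows up.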
---

## Architecture Principles Summary

1. **Flexibility First**: SPO/RDF model provides maximum adaptability
2. **LLM Optimization**: All design decisions consider LLM interaction requirements
3. **Semantic Efficiency**: Direct embedding-to-node mapping for optimal query performance
4. **Pragmatic Scalability**: Balance perfect accuracy with practical distributed processing
5. **Event-Driven Coordination**: Pub-sub enables loose coupling and scalability
6. **Agent-Friendly**: Support complex, multi-agent processing workflows
7. **Analytical Compatibility**: Bridge graph and columnar paradigms for comprehensive querying

These foundations establish a knowledge graph architecture that balances theoretical rigor with practical scalability requirements, optimized for LLM integration and distributed processing.