# Unit Testing Strategy for TrustGraph Microservices
## Overview
This document outlines the unit testing strategy for the TrustGraph microservices architecture. The approach focuses on testing business logic while mocking external infrastructure to ensure fast, reliable, and maintainable tests.
## 1. Test Framework: pytest + pytest-asyncio
- **pytest**: Standard Python testing framework with excellent fixture support
- **pytest-asyncio**: Essential for testing async processors
- **pytest-mock**: Thin `pytest` wrapper around the standard library's `unittest.mock`
## 2. Core Testing Patterns
### Service Layer Testing
```python
@pytest.mark.asyncio
async def test_text_completion_service():
    # Test the core business logic, not external APIs
    processor = TextCompletionProcessor(model="test-model")

    # Mock the external dependency on the processor instance itself
    with patch.object(processor, "llm_client", new=AsyncMock()) as mock_client:
        mock_client.generate.return_value = "test response"

        result = await processor.process_message(test_message)
        assert result.content == "test response"
```
### Message Processing Testing
```python
@pytest.fixture
def mock_pulsar_consumer():
    return AsyncMock(spec=pulsar.Consumer)

@pytest.fixture
def mock_pulsar_producer():
    return AsyncMock(spec=pulsar.Producer)

@pytest.mark.asyncio
async def test_message_flow(mock_pulsar_consumer, mock_pulsar_producer):
    # Test message handling without a live Pulsar broker
    processor = FlowProcessor(
        consumer=mock_pulsar_consumer,
        producer=mock_pulsar_producer,
    )
    # Exercise the message-processing logic here
```
## 3. Mock Strategy
### Mock External Dependencies, Not Business Logic
- **Mock**: LLM APIs, vector DBs, graph DBs
- **Don't mock**: Core business logic, data transformations
- **Mock**: Pulsar clients (infrastructure)
- **Don't mock**: Message validation, processing logic
### Dependency Injection Pattern
```python
class TextCompletionProcessor:
    def __init__(self, llm_client=None, **kwargs):
        self.llm_client = llm_client or create_default_client()

# In tests
processor = TextCompletionProcessor(llm_client=mock_client)
```
## 4. Test Categories
### Unit Tests (70%)
- Individual service business logic
- Message processing functions
- Data transformation logic
- Configuration parsing
- Error handling
### Integration Tests (20%)
- Service-to-service communication patterns
- Database operations with test containers
- End-to-end message flows
### Contract Tests (10%)
- Pulsar message schemas
- API response formats
- Service interface contracts
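A contract test can be as small as a schema check run against the messages a service emits. The sketch below uses a hand-rolled required-fields check; the field names and types are illustrative assumptions, not the actual TrustGraph message schemas.

```python
# Minimal contract-test sketch: verify an outgoing message carries the
# fields and types the consumer expects. REQUIRED_FIELDS is hypothetical;
# real schemas come from the shared schema definitions.

REQUIRED_FIELDS = {"id": str, "prompt": str, "temperature": float}

def check_contract(message: dict) -> list:
    """Return a list of contract violations (empty means the message conforms)."""
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in message:
            errors.append(f"missing field: {field}")
        elif not isinstance(message[field], expected_type):
            errors.append(f"wrong type for {field}")
    return errors

def test_message_conforms_to_contract():
    message = {"id": "t1", "prompt": "hello", "temperature": 0.7}
    assert check_contract(message) == []

def test_missing_field_is_reported():
    assert "missing field: prompt" in check_contract({"id": "t1", "temperature": 0.1})
```

Keeping the check as a plain function means the same contract can be asserted from both the producer's and the consumer's test suites.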
## 5. Test Structure
```
tests/
├── unit/
│   ├── test_text_completion/
│   ├── test_embeddings/
│   ├── test_storage/
│   └── test_utils/
├── integration/
│   ├── test_flows/
│   └── test_databases/
├── fixtures/
│   ├── messages.py
│   ├── configs.py
│   └── mocks.py
└── conftest.py
```
## 6. Key Testing Tools
- **testcontainers**: For database integration tests
- **responses**: Mock HTTP APIs
- **freezegun**: Time-based testing
- **factory-boy**: Test data generation
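To illustrate the factory pattern that factory-boy provides, here is a dependency-free sketch in the same spirit: a factory builds a valid default message and each test overrides only the fields it cares about. The field names and defaults are hypothetical.

```python
# Hand-rolled test-data factory (what factory-boy does declaratively).
import itertools

_ids = itertools.count(1)

def make_message(**overrides) -> dict:
    """Build a valid test message; tests override only the fields under test."""
    message = {
        "id": f"msg-{next(_ids)}",
        "prompt": "Test prompt",
        "temperature": 0.7,
    }
    message.update(overrides)
    return message
```

A test for temperature handling then reads `make_message(temperature=1.5)` and stays readable because the irrelevant fields are hidden in the factory.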
## 7. Service-Specific Testing Approaches
### Text Completion Services
- Mock LLM provider APIs (OpenAI, Claude, Ollama)
- Test prompt construction and response parsing
- Verify rate limiting and error handling
- Test token counting and metrics collection
### Embeddings Services
- Mock embedding providers (FastEmbed, Ollama)
- Test vector dimension consistency
- Verify batch processing logic
- Test embedding storage operations
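A dimension-consistency check against a mocked provider can be sketched as follows. The `embed` method signature and the 384-dimension expectation are assumptions for illustration (384 matches common FastEmbed defaults, but the real value depends on the configured model).

```python
import asyncio
from unittest.mock import AsyncMock

EXPECTED_DIM = 384  # assumed; depends on the configured embedding model

async def check_dimensions(client, texts):
    # Every returned vector must have the dimension the stores were built for
    vectors = await client.embed(texts)
    return all(len(v) == EXPECTED_DIM for v in vectors)

def test_embedding_dimensions_consistent():
    client = AsyncMock()
    client.embed.return_value = [[0.0] * EXPECTED_DIM for _ in range(3)]
    assert asyncio.run(check_dimensions(client, ["a", "b", "c"]))
```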
### Storage Services
- Use testcontainers for database integration tests
- Mock database clients for unit tests
- Test query construction and result parsing
- Verify data persistence and retrieval logic
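For the unit-test side, query construction can be verified by inspecting what the mocked database client received. The store class and query text below are hypothetical stand-ins for the real storage services.

```python
from unittest.mock import MagicMock

class TripleStore:
    """Illustrative storage wrapper; the real services differ."""
    def __init__(self, session):
        self.session = session

    def get_by_subject(self, subject):
        query = "SELECT p, o FROM triples WHERE s = %s"
        return self.session.execute(query, (subject,))

def test_query_construction():
    session = MagicMock()
    TripleStore(session).get_by_subject("node-1")
    # Assert on what reached the (mocked) database client
    query, params = session.execute.call_args.args
    assert "WHERE s = %s" in query
    assert params == ("node-1",)
```

The same test against a testcontainers-backed database then becomes the integration-level counterpart, with the mock swapped for a real session.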
### Query Services
- Mock vector similarity search operations
- Test graph traversal logic
- Verify result ranking and filtering
- Test query optimization
## 8. Best Practices
### Test Isolation
- Each test should be independent
- Use fixtures for common setup
- Clean up resources after tests
- Avoid test order dependencies
### Async Testing
- Use `@pytest.mark.asyncio` for async tests
- Mock async dependencies properly
- Test concurrent operations
- Handle timeout scenarios
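Timeout scenarios in particular are easy to cover with a deliberately slow mock: the test below asserts that a hanging dependency surfaces as a `TimeoutError` instead of blocking the suite. The helper and client names are illustrative.

```python
import asyncio
from unittest.mock import AsyncMock

async def call_with_timeout(client, prompt, timeout=0.01):
    # Bound every external call so a stuck dependency cannot hang the service
    return await asyncio.wait_for(client.generate(prompt), timeout=timeout)

def test_slow_dependency_times_out():
    client = AsyncMock()

    async def slow(prompt):
        await asyncio.sleep(1)  # far longer than the 0.01s budget

    client.generate = slow
    try:
        asyncio.run(call_with_timeout(client, "hi"))
        assert False, "expected a timeout"
    except (asyncio.TimeoutError, TimeoutError):
        pass
```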
### Error Handling
- Test both success and failure scenarios
- Verify proper exception handling
- Test retry mechanisms
- Validate error response formats
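Retry mechanisms are well suited to `side_effect` lists: fail a fixed number of times, then succeed, and assert the caller retried exactly as configured. The retry helper below is a hypothetical sketch, not the production implementation.

```python
import asyncio
from unittest.mock import AsyncMock

async def with_retries(fn, attempts=3):
    # Illustrative retry loop; real services may add backoff and jitter
    for i in range(attempts):
        try:
            return await fn()
        except RuntimeError:
            if i == attempts - 1:
                raise

def test_retries_until_success():
    # Fail twice, then succeed on the third attempt
    client = AsyncMock(side_effect=[RuntimeError("boom"), RuntimeError("boom"), "ok"])
    assert asyncio.run(with_retries(client)) == "ok"
    assert client.await_count == 3
```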
### Configuration Testing
- Test different configuration scenarios
- Verify parameter validation
- Test environment variable handling
- Test configuration defaults
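Environment-variable handling tests well with `patch.dict` on `os.environ` (pytest's `monkeypatch` fixture works equally well). The variable name and default below are assumptions for illustration.

```python
import os
from unittest.mock import patch

def load_model_name(default="gpt-4"):
    # Hypothetical config loader: env var wins, otherwise fall back to the default
    return os.environ.get("TG_MODEL", default)

def test_env_var_overrides_default():
    with patch.dict(os.environ, {"TG_MODEL": "local-model"}):
        assert load_model_name() == "local-model"

def test_default_when_unset():
    with patch.dict(os.environ, {}, clear=True):
        assert load_model_name() == "gpt-4"
```

`patch.dict` restores the environment when the block exits, which keeps these tests isolated from each other and from the developer's shell.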
## 9. Example Test Implementation
```python
# tests/unit/test_text_completion/test_openai_processor.py
import pytest
from unittest.mock import AsyncMock

from openai import RateLimitError  # exact import path depends on the openai version
from trustgraph.model.text_completion.openai import Processor

@pytest.fixture
def mock_openai_client():
    return AsyncMock()

@pytest.fixture
def processor(mock_openai_client):
    return Processor(client=mock_openai_client, model="gpt-4")

@pytest.mark.asyncio
async def test_process_message_success(processor, mock_openai_client):
    # Arrange
    mock_openai_client.chat.completions.create.return_value = AsyncMock(
        choices=[AsyncMock(message=AsyncMock(content="Test response"))]
    )
    message = {
        "id": "test-id",
        "prompt": "Test prompt",
        "temperature": 0.7,
    }

    # Act
    result = await processor.process_message(message)

    # Assert
    assert result.content == "Test response"
    mock_openai_client.chat.completions.create.assert_called_once()

@pytest.mark.asyncio
async def test_process_message_rate_limit(processor, mock_openai_client):
    # Arrange
    mock_openai_client.chat.completions.create.side_effect = RateLimitError("Rate limited")
    message = {"id": "test-id", "prompt": "Test prompt"}

    # Act & Assert
    with pytest.raises(RateLimitError):
        await processor.process_message(message)
```
## 10. Running Tests
```bash
# Run all tests
pytest
# Run unit tests only
pytest tests/unit/
# Run with coverage
pytest --cov=trustgraph --cov-report=html
# Run async tests
pytest -v tests/unit/test_text_completion/
# Run specific test file
pytest tests/unit/test_text_completion/test_openai_processor.py
```
## 11. Continuous Integration
- Run tests on every commit
- Enforce minimum code coverage (80%+)
- Run tests against multiple Python versions
- Include integration tests in CI pipeline
- Generate test reports and coverage metrics
## Conclusion
This testing strategy ensures that TrustGraph microservices are thoroughly tested without relying on external infrastructure. By focusing on business logic and mocking external dependencies, we achieve fast, reliable tests that provide confidence in code quality while maintaining development velocity.