# TrustGraph Logging Strategy

## Overview

TrustGraph uses Python's built-in `logging` module for all logging operations. This provides a standardized, flexible approach to logging across all components of the system.

## Default Configuration

### Logging Level

- **Default Level**: `INFO`
- **Debug Mode**: `DEBUG` (enabled via command-line argument)
- **Production**: `WARNING` or `ERROR` as appropriate

### Output Destination

All logs should be written to **standard output (stdout)** to ensure compatibility with containerized environments and log aggregation systems.
## Implementation Guidelines

### 1. Logger Initialization

Each module should create its own logger using the module's `__name__`:

```python
import logging

logger = logging.getLogger(__name__)
```
### 2. Centralized Configuration

The logging configuration should be centralized in `async_processor.py` (or a dedicated logging configuration module) since it's inherited by much of the codebase:

```python
import argparse
import logging
import sys


def setup_logging(log_level='INFO'):
    """Configure logging for the entire application"""
    logging.basicConfig(
        level=getattr(logging, log_level.upper()),
        format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
        # StreamHandler defaults to stderr; pass sys.stdout explicitly
        # to satisfy the stdout-only output policy above
        handlers=[logging.StreamHandler(sys.stdout)]
    )


def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--log-level',
        default='INFO',
        choices=['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'],
        help='Set the logging level (default: INFO)'
    )
    return parser.parse_args()


# In main execution
if __name__ == '__main__':
    args = parse_args()
    setup_logging(args.log_level)
```
### 3. Logging Best Practices

#### Log Levels Usage

- **DEBUG**: Detailed information for diagnosing problems (variable values, function entry/exit)
- **INFO**: General informational messages (service started, configuration loaded, processing milestones)
- **WARNING**: Warning messages for potentially harmful situations (deprecated features, recoverable errors)
- **ERROR**: Error messages for serious problems (failed operations, exceptions)
- **CRITICAL**: Critical messages for system failures requiring immediate attention

#### Message Format

```python
# Good - includes context
logger.info(f"Processing document: {doc_id}, size: {doc_size} bytes")
logger.error(f"Failed to connect to database: {error}", exc_info=True)

# Avoid - lacks context
logger.info("Processing document")
logger.error("Connection failed")
```
#### Performance Considerations

```python
# Lazy %-style formatting defers string building until the record is
# actually emitted; note the arguments themselves are still evaluated
logger.debug("Operation result: %s", result)

# Gate genuinely expensive computations on the log level
if logger.isEnabledFor(logging.DEBUG):
    debug_data = compute_expensive_debug_info()
    logger.debug(f"Debug data: {debug_data}")
```
### 4. Structured Logging

For complex data, use structured logging via the `extra` parameter. These fields are attached to the `LogRecord`, but they only appear in output if the formatter references them (for example, a JSON formatter):

```python
logger.info("Request processed", extra={
    'request_id': request_id,
    'duration_ms': duration,
    'status_code': status_code,
    'user_id': user_id
})
```
### 5. Exception Logging

Always include stack traces for exceptions:

```python
try:
    process_data()
except Exception as e:
    logger.error(f"Failed to process data: {e}", exc_info=True)
    raise
```

Inside an `except` block, `logger.exception(...)` is equivalent shorthand for `logger.error(..., exc_info=True)`.
### 6. Async Logging Considerations

Python's `logging` module is thread-safe out of the box; the main concern in async code is that handlers perform blocking I/O on the event loop's thread:

```python
import asyncio
import logging

async def async_operation():
    logger = logging.getLogger(__name__)
    logger.info(f"Starting async operation in task: {asyncio.current_task().get_name()}")
```
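When handlers write to slow destinations, the stdlib's `logging.handlers.QueueHandler`/`QueueListener` pair keeps that write off the calling task: the coroutine only enqueues the record, and a background thread does the actual I/O. A minimal sketch (logger names are illustrative):

```python
import logging
import logging.handlers
import queue
import sys

# Records go to an in-memory queue so the caller never blocks on I/O;
# a background QueueListener thread performs the actual writes.
log_queue = queue.Queue()
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s'))

listener = logging.handlers.QueueListener(log_queue, handler)
listener.start()

logger = logging.getLogger("trustgraph.async_demo")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.QueueHandler(log_queue))

logger.info("Queued without blocking the event loop")
listener.stop()  # flushes remaining records on shutdown
```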
## Environment Variables

Support environment-based configuration as a fallback:

```python
import os

log_level = os.environ.get('TRUSTGRAPH_LOG_LEVEL', 'INFO')
```
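One way to combine this fallback with the `--log-level` flag from section 2 is to use the environment value as the argparse default, giving the precedence: CLI flag, then environment variable, then `INFO`. A sketch (note that argparse does not validate defaults against `choices`):

```python
import argparse
import os

def resolve_log_level(argv=None):
    """CLI flag wins over TRUSTGRAPH_LOG_LEVEL, which wins over INFO."""
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--log-level',
        default=os.environ.get('TRUSTGRAPH_LOG_LEVEL', 'INFO'),
        choices=['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'])
    return parser.parse_args(argv).log_level
```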
## Testing

During tests, consider using a different logging configuration:

```python
# In test setup
logging.getLogger().setLevel(logging.WARNING)  # Reduce noise during tests
```
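Beyond globally raising the level, the stdlib's `unittest.TestCase.assertLogs` captures records so tests can assert on them (pytest's `caplog` fixture plays a similar role). The logger name below is illustrative:

```python
import logging
import unittest

class TestProcessing(unittest.TestCase):
    def test_logs_document_id(self):
        logger = logging.getLogger("trustgraph.example")
        # assertLogs captures records at the given level or above
        with self.assertLogs("trustgraph.example", level="INFO") as captured:
            logger.info("Processing document: %s", "doc-42")
        self.assertIn("doc-42", captured.output[0])
```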
## Monitoring Integration

Ensure log format is compatible with monitoring tools:

- Include timestamps in ISO format
- Use consistent field names
- Include correlation IDs where applicable
- Structure logs for easy parsing (JSON format for production)
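The points above can be met with the stdlib alone; the sketch below is one minimal way (field names and the `correlation_id` attribute are illustrative, and libraries such as `python-json-logger` offer more complete implementations):

```python
import json
import logging
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON object per line."""
    def format(self, record):
        entry = {
            # ISO-8601 timestamp, as monitoring tools expect
            'timestamp': datetime.fromtimestamp(
                record.created, tz=timezone.utc).isoformat(),
            'level': record.levelname,
            'logger': record.name,
            'message': record.getMessage(),
        }
        # Surface a correlation ID when supplied via extra={...}
        if hasattr(record, 'correlation_id'):
            entry['correlation_id'] = record.correlation_id
        return json.dumps(entry)
```

Attach it with `handler.setFormatter(JsonFormatter())` on the stdout handler configured in section 2.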
## Security Considerations

- Never log sensitive information (passwords, API keys, personal data)
- Sanitize user input before logging
- Use placeholders for sensitive fields: `user_id=****1234`
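One minimal sketch of the placeholder style above (the `mask` helper and its parameters are hypothetical, not existing TrustGraph code):

```python
def mask(value, visible=4):
    """Keep only the trailing characters of a sensitive value."""
    value = str(value)
    if len(value) <= visible:
        return '*' * len(value)
    return '****' + value[-visible:]

# Usage: logger.info("Login for user_id=%s", mask("8675551234"))
```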
## Migration Path

For existing code using print statements:

1. Replace `print()` with appropriate logger calls
2. Choose appropriate log levels based on message importance
3. Add context to make logs more useful
4. Test logging output at different levels
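The steps above, sketched on a hypothetical config loader (function and path names are illustrative):

```python
import logging

logger = logging.getLogger(__name__)

def load_config(path):
    # Before: print("loading config from " + path)
    # After: appropriate level plus context (steps 1-3)
    logger.info("Loading configuration from %s", path)
    try:
        with open(path) as f:
            return f.read()
    except OSError as e:
        # Before: print("error: " + str(e))
        logger.error("Failed to load configuration %s: %s", path, e,
                     exc_info=True)
        raise
```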