Minor agent tweaks (#692)

Update RAG and Agent clients for streaming message handling

GraphRAG now sends multiple message types in a stream:
- 'explain' messages with explain_id and explain_graph for
  provenance
- 'chunk' messages with response text fragments
- end_of_session marker for stream completion
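A stream like the one described above might look like this; the payload shapes are an assumption for illustration, not the actual GraphRAG wire format:

```python
# Hypothetical payloads for the three message types described above.
stream = [
    {"message_type": "explain",
     "explain_id": "q-1",
     "explain_graph": [("alice", "knows", "bob")]},   # provenance
    {"message_type": "chunk", "response": "Alice knows "},
    {"message_type": "chunk", "response": "Bob."},
    {"message_type": "end_of_session"},               # stream complete
]

# Only 'chunk' messages carry answer text.
answer = "".join(
    m["response"] for m in stream if m["message_type"] == "chunk"
)
print(answer)  # Alice knows Bob.
```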

Updated all clients to handle this properly:

CLI clients (trustgraph-base/trustgraph/clients/):
- graph_rag_client.py: Added chunk_callback and explain_callback
- document_rag_client.py: Added chunk_callback and explain_callback
- agent_client.py: Added think, observe, answer_callback,
  error_callback

Internal clients (trustgraph-base/trustgraph/base/):
- graph_rag_client.py: Async callbacks for streaming
- agent_client.py: Async callbacks for streaming

All clients now:
- Route messages by chunk_type/message_type
- Stream via optional callbacks for incremental delivery
- Wait for proper completion signals
  (end_of_dialog/end_of_session/end_of_stream)
- Accumulate and return complete response for callers not using
  callbacks
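The four behaviours above can be sketched in one consume loop. This is a pattern illustration only: the stream shape, message fields, and function names are assumptions, not the actual client code.

```python
import asyncio

async def consume(stream, chunk_callback=None, explain_callback=None):
    """Route messages by type, fire optional callbacks for incremental
    delivery, wait for the completion marker, and accumulate the full
    response for callers not using callbacks."""
    parts = []
    async for msg in stream:
        kind = msg.get("message_type")
        if kind == "explain":
            if explain_callback:
                await explain_callback(msg)
        elif kind == "chunk":
            parts.append(msg["response"])
            if chunk_callback:
                await chunk_callback(msg["response"])
        elif kind == "end_of_session":
            break          # proper completion signal
    return "".join(parts)  # complete accumulated response

async def demo():
    async def fake_stream():
        for m in [{"message_type": "explain", "explain_id": "q-1"},
                  {"message_type": "chunk", "response": "42"},
                  {"message_type": "end_of_session"}]:
            yield m
    return await consume(fake_stream())

print(asyncio.run(demo()))  # 42
```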

Updated callers:
- extract/kg/agent/extract.py: Uses new invoke(question=...) API
- tests/integration/test_agent_kg_extraction_integration.py:
  Updated mocks

This fixes the agent infinite loop issue where knowledge_query was
returning the first 'explain' message (empty response) instead of
waiting for the actual answer chunks.
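The bug and its fix can be contrasted in a sketch, using the same assumed message shapes as above (not the actual knowledge_query implementation):

```python
def knowledge_query_buggy(stream):
    # Old behaviour (sketch): return on the first message, which is
    # the 'explain' provenance message carrying no answer text.
    first = next(iter(stream))
    return first.get("response", "")   # empty -> agent retries forever

def knowledge_query_fixed(stream):
    # Fixed behaviour (sketch): skip non-chunk messages and
    # accumulate answer chunks until the end_of_session marker.
    parts = []
    for msg in stream:
        if msg.get("message_type") == "chunk":
            parts.append(msg["response"])
        elif msg.get("message_type") == "end_of_session":
            break
    return "".join(parts)

stream = [
    {"message_type": "explain", "explain_id": "q-1"},
    {"message_type": "chunk", "response": "the answer"},
    {"message_type": "end_of_session"},
]
assert knowledge_query_buggy(stream) == ""
assert knowledge_query_fixed(stream) == "the answer"
```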

cybermaggedon 2026-03-12 17:59:02 +00:00 committed by GitHub
parent 45e6ad4abc
commit aecf00f040
8 changed files with 246 additions and 58 deletions


@@ -183,24 +183,8 @@ class Processor(FlowProcessor):
         logger.debug(f"Agent prompt: {prompt}")
-        async def handle(response):
-            logger.debug(f"Agent response: {response}")
-            if response.error is not None:
-                if response.error.message:
-                    raise RuntimeError(str(response.error.message))
-                else:
-                    raise RuntimeError(str(response.error))
-            if response.answer is not None:
-                return True
-            else:
-                return False
-        # Send to agent API
-        agent_response = await flow("agent-request").invoke(
-            recipient = handle,
-            question = prompt
-        )