mirror of
https://github.com/trustgraph-ai/trustgraph.git
synced 2026-05-03 12:22:37 +02:00
Minor agent tweaks (#692)
Update RAG and Agent clients for streaming message handling

GraphRAG now sends multiple message types in a stream:
- 'explain' messages with explain_id and explain_graph for provenance
- 'chunk' messages with response text fragments
- end_of_session marker for stream completion

Updated all clients to handle this properly:

CLI clients (trustgraph-base/trustgraph/clients/):
- graph_rag_client.py: Added chunk_callback and explain_callback
- document_rag_client.py: Added chunk_callback and explain_callback
- agent_client.py: Added think, observe, answer_callback, error_callback

Internal clients (trustgraph-base/trustgraph/base/):
- graph_rag_client.py: Async callbacks for streaming
- agent_client.py: Async callbacks for streaming

All clients now:
- Route messages by chunk_type/message_type
- Stream via optional callbacks for incremental delivery
- Wait for proper completion signals (end_of_dialog/end_of_session/end_of_stream)
- Accumulate and return complete response for callers not using callbacks

Updated callers:
- extract/kg/agent/extract.py: Uses new invoke(question=...) API
- tests/integration/test_agent_kg_extraction_integration.py: Updated mocks

This fixes the agent infinite loop issue where knowledge_query was returning
the first 'explain' message (empty response) instead of waiting for the
actual answer chunks.

Concurrency in triples query
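The routing and accumulation behavior described above can be sketched as follows. This is an illustrative sketch only, not the actual trustgraph client code: the class name, dictionary message shapes, and field names ("chunk_type", "response", "explain_graph", "end_of_session") are assumptions based on the commit message.

```python
# Hypothetical sketch of streaming RAG response handling.
# Message shapes follow the commit message; names are illustrative.

class StreamingRagClient:
    def __init__(self, chunk_callback=None, explain_callback=None):
        self.chunk_callback = chunk_callback      # optional incremental delivery
        self.explain_callback = explain_callback  # optional provenance delivery
        self.chunks = []

    def handle_message(self, msg):
        """Route one streamed message; return True when the stream is done."""
        if msg.get("end_of_session"):
            return True
        if msg.get("chunk_type") == "explain":
            # Provenance only -- must NOT be mistaken for the answer
            # (the bug fixed here: returning this empty message early).
            if self.explain_callback:
                self.explain_callback(msg.get("explain_graph"))
            return False
        if msg.get("chunk_type") == "chunk":
            text = msg.get("response", "")
            self.chunks.append(text)
            if self.chunk_callback:
                self.chunk_callback(text)
            return False
        return False

    def invoke(self, messages):
        """Consume messages until completion; return the accumulated answer."""
        for msg in messages:
            if self.handle_message(msg):
                break
        return "".join(self.chunks)
```

Callers that pass no callbacks still get the complete accumulated response from invoke(), matching the "accumulate and return" behavior described in the commit message.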
This commit is contained in:
parent 45e6ad4abc
commit aecf00f040

8 changed files with 246 additions and 58 deletions
@@ -183,24 +183,8 @@ class Processor(FlowProcessor):

        logger.debug(f"Agent prompt: {prompt}")

        async def handle(response):

            logger.debug(f"Agent response: {response}")

            if response.error is not None:
                if response.error.message:
                    raise RuntimeError(str(response.error.message))
                else:
                    raise RuntimeError(str(response.error))

            if response.answer is not None:
                return True
            else:
                return False

        # Send to agent API
        agent_response = await flow("agent-request").invoke(
            recipient = handle,
            question = prompt
        )
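The agent client's callback routing mentioned in the commit message (think, observe, answer_callback, error_callback) can be sketched in the same style. This is a hypothetical illustration: the handler class and the message field names ("thought", "observation", "answer", "error") are assumptions, not the actual trustgraph wire format.

```python
# Illustrative sketch of agent-client message dispatch by message type.
# Field names are assumed for illustration.

class AgentStreamHandler:
    def __init__(self, think=None, observe=None,
                 answer_callback=None, error_callback=None):
        self.think = think
        self.observe = observe
        self.answer_callback = answer_callback
        self.error_callback = error_callback
        self.answer = None

    def handle(self, msg):
        """Dispatch one agent message; return True when the dialog is over."""
        if msg.get("error") is not None:
            if self.error_callback:
                self.error_callback(msg["error"])
            raise RuntimeError(str(msg["error"]))
        if msg.get("thought") is not None:
            if self.think:
                self.think(msg["thought"])
            return False          # intermediate step, keep waiting
        if msg.get("observation") is not None:
            if self.observe:
                self.observe(msg["observation"])
            return False          # intermediate step, keep waiting
        if msg.get("answer") is not None:
            self.answer = msg["answer"]
            if self.answer_callback:
                self.answer_callback(msg["answer"])
            return True           # final answer ends the dialog
        return False
```

Waiting until an answer (rather than the first intermediate message) arrives is the completion-signal behavior that fixes the infinite-loop issue described above.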