GraphRAG Query-Time Explainability (#677)

Implements full explainability pipeline for GraphRAG queries, enabling
traceability from answers back to source documents.

Renamed throughout for clarity:
- provenance_callback → explain_callback
- provenance_id → explain_id
- provenance_collection → explain_collection
- message_type "provenance" → "explain"
- Queue name "provenance" → "explainability"

GraphRAG queries now emit explainability events as they execute:
1. Session - query text and timestamp
2. Retrieval - edges retrieved from subgraph
3. Selection - selected edges with LLM reasoning (JSONL with id +
   reasoning)
4. Answer - reference to synthesized response

Events stream via explain_callback during query(), enabling real-time
display in client UIs.
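The callback contract described above can be sketched as follows. Only the name explain_callback comes from the commit; the driver function, event payloads, and URN format shown here are illustrative assumptions:

```python
import asyncio

# Hypothetical sketch of the explain_callback contract: the query engine
# invokes the callback once per event, in emission order, passing the
# event's triples and a URN identifying the event.
async def run_query_with_explain():
    events = []

    async def explain_callback(triples, explain_id):
        # Receives each explainability event as it streams out of query().
        events.append((explain_id, triples))

    # Stand-in for GraphRag.query() emitting the four event types in order.
    session_id = "1234"
    for kind in ("session", "retrieval", "selection", "answer"):
        await explain_callback(
            [("s", "p", "o")],                      # event triples
            f"urn:trustgraph:{kind}:{session_id}",  # event URN (format assumed)
        )
    return events

events = asyncio.run(run_query_with_explain())
```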

- Answers are stored in the librarian service (not inline in the graph;
  answers are too large)
- Document ID as URN: urn:trustgraph:answer:{session_id}
- Graph stores tg:document reference (IRI) to librarian document
- Added librarian producer/consumer to graph-rag service
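The answer-document URN format above is simple enough to sketch directly; the helper name is hypothetical, but the URN pattern is the one stated in the commit:

```python
# Sketch of the answer-document URN described above. Only the
# urn:trustgraph:answer:{session_id} pattern comes from the commit;
# the helper name is invented for illustration.
def answer_document_urn(session_id: str) -> str:
    return f"urn:trustgraph:answer:{session_id}"

urn = answer_document_urn("abc123")
```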

- get_labelgraph() now returns (labeled_edges, uri_map)
- uri_map maps edge_id(label_s, label_p, label_o) →
  (uri_s, uri_p, uri_o)
- Explainability data stores original URIs, not labels
- Enables tracing edges back to reifying statements via tg:reifies
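The shape of the (labeled_edges, uri_map) return value can be sketched as below. The edge_id() here is a hypothetical stand-in (a hash of the labelled tuple); the real implementation in graph_rag.py may differ:

```python
import hashlib

# Hypothetical stand-in for edge_id(): the commit only says the uri_map is
# keyed by an ID derived from the labelled (s, p, o) tuple.
def edge_id(label_s: str, label_p: str, label_o: str) -> str:
    return hashlib.sha256(f"{label_s}|{label_p}|{label_o}".encode()).hexdigest()[:16]

# Shape of the get_labelgraph() return value as described above:
# human-readable edges, plus a map back to the original URIs.
labeled_edges = [("Cat", "eats", "Mouse")]
uri_map = {
    edge_id("Cat", "eats", "Mouse"): (
        "http://example.org/cat",
        "http://example.org/eats",
        "http://example.org/mouse",
    ),
}

# Explainability data records the original URIs, not the labels.
uris = uri_map[edge_id(*labeled_edges[0])]
```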

- Added serialize_triple() to query service (matches storage format)
- get_term_value() now handles TRIPLE type terms
- Enables querying by quoted triple in object position:
  ?stmt tg:reifies <<s p o>>
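A minimal sketch of serialising a quoted triple so it can be matched in the object position. The `<<s p o>>` wire format and both function bodies are assumptions for illustration, not the actual storage format:

```python
# Hypothetical serialisation of a quoted triple for query matching
# (?stmt tg:reifies <<s p o>>). The "<<s p o>>" text format here is an
# assumption; the real storage format may differ.
def serialize_triple(s: str, p: str, o: str) -> str:
    return f"<<{s} {p} {o}>>"

def get_term_value(term):
    # Terms may be plain IRIs/literals, or TRIPLE-type terms represented
    # here as 3-tuples (quoted triples).
    if isinstance(term, tuple) and len(term) == 3:
        return serialize_triple(*term)
    return term

quoted = get_term_value(("ex:s", "ex:p", "ex:o"))
```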

- CLI (invoke_graph_rag) displays real-time explainability events during
  query
- Resolves rdfs:label for edge components (s, p, o)
- Traces source chain via prov:wasDerivedFrom to root document
- Output: "Source: Chunk 1 → Page 2 → Document Title"
- Label caching to avoid repeated queries
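The label caching mentioned above can be sketched as a simple memo table; lookup_label() is a stand-in for a real rdfs:label query, and all names here are hypothetical:

```python
# Sketch of label caching: resolve rdfs:label once per URI, then reuse it.
# lookup_label() stands in for a real "uri rdfs:label ?l" triples query.
label_cache: dict[str, str] = {}
lookups = 0

def lookup_label(uri: str) -> str:
    global lookups
    lookups += 1
    # Fake label store for illustration; falls back to the URI itself.
    return {"http://example.org/cat": "Cat"}.get(uri, uri)

def resolve_label(uri: str) -> str:
    if uri not in label_cache:
        label_cache[uri] = lookup_label(uri)
    return label_cache[uri]

first = resolve_label("http://example.org/cat")
second = resolve_label("http://example.org/cat")  # served from the cache
```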

GraphRagResponse:
- explain_id: str | None
- explain_collection: str | None
- message_type: str ("chunk" or "explain")
- end_of_session: bool

trustgraph-base/trustgraph/provenance/:
- namespaces.py - Added TG_DOCUMENT predicate
- triples.py - answer_triples() supports document_id reference
- uris.py - Added edge_selection_uri()

trustgraph-base/trustgraph/schema/services/retrieval.py:
- GraphRagResponse with explain_id, explain_collection, end_of_session

trustgraph-flow/trustgraph/retrieval/graph_rag/:
- graph_rag.py - URI preservation, streaming answer accumulation
- rag.py - Librarian integration, real-time explain emission

trustgraph-flow/trustgraph/query/triples/cassandra/service.py:
- Quoted triple serialization for query matching

trustgraph-cli/trustgraph/cli/invoke_graph_rag.py:
- Full explainability display with label resolution and source tracing
Author: cybermaggedon
Date: 2026-03-10 10:00:01 +00:00 (committed by GitHub)
Parent: d2d71f859d
Commit: 7a6197d8c3
Signature: GPG key ID B5690EEEBB952194 (no known key found for this
signature in database)
24 changed files with 2001 additions and 323 deletions


@@ -540,41 +540,68 @@ class TestQuery:
         query.maybe_label = AsyncMock(side_effect=mock_maybe_label)
         # Call get_labelgraph
-        result = await query.get_labelgraph("test query")
+        labeled_edges, uri_map = await query.get_labelgraph("test query")
         # Verify get_subgraph was called
         query.get_subgraph.assert_called_once_with("test query")
         # Verify label triples are filtered out
-        assert len(result) == 2 # Label triple should be excluded
+        assert len(labeled_edges) == 2 # Label triple should be excluded
         # Verify maybe_label was called for non-label triples
         expected_calls = [
             (("entity1",), {}), (("predicate1",), {}), (("object1",), {}),
             (("entity3",), {}), (("predicate3",), {}), (("object3",), {})
         ]
         assert query.maybe_label.call_count == 6
         # Verify result contains human-readable labels
-        expected_result = [
+        expected_edges = [
             ("Human Entity One", "Human Predicate One", "Human Object One"),
             ("Human Entity Three", "Human Predicate Three", "Human Object Three")
         ]
-        assert result == expected_result
+        assert labeled_edges == expected_edges
+        # Verify uri_map maps labeled edges back to original URIs
+        assert len(uri_map) == 2
     @pytest.mark.asyncio
     async def test_graph_rag_query_method(self):
-        """Test GraphRag.query method orchestrates full RAG pipeline"""
+        """Test GraphRag.query method orchestrates full RAG pipeline with real-time provenance"""
+        import json
+        from trustgraph.retrieval.graph_rag.graph_rag import edge_id
         # Create mock clients
         mock_prompt_client = AsyncMock()
         mock_embeddings_client = AsyncMock()
         mock_graph_embeddings_client = AsyncMock()
         mock_triples_client = AsyncMock()
-        # Mock prompt client response
+        # Mock prompt client responses for two-step process
         expected_response = "This is the RAG response"
-        mock_prompt_client.kg_prompt.return_value = expected_response
+        test_labelgraph = [("Subject", "Predicate", "Object")]
+        # Compute the edge ID for the test edge
+        test_edge_id = edge_id("Subject", "Predicate", "Object")
+        # Create uri_map for the test edge (maps labeled edge ID to original URIs)
+        test_uri_map = {
+            test_edge_id: ("http://example.org/subject", "http://example.org/predicate", "http://example.org/object")
+        }
+        # Mock edge selection response (JSONL format)
+        edge_selection_response = json.dumps({"id": test_edge_id, "reasoning": "relevant"})
+        # Configure prompt mock to return different responses based on prompt name
+        async def mock_prompt(prompt_name, variables=None, streaming=False, chunk_callback=None):
+            if prompt_name == "kg-edge-selection":
+                return edge_selection_response
+            elif prompt_name == "kg-synthesis":
+                return expected_response
+            return ""
+        mock_prompt_client.prompt = mock_prompt
         # Initialize GraphRag
         graph_rag = GraphRag(
             prompt_client=mock_prompt_client,
@@ -583,39 +610,55 @@ class TestQuery:
             triples_client=mock_triples_client,
             verbose=False
         )
         # Mock the Query class behavior by patching get_labelgraph
-        test_labelgraph = [("Subject", "Predicate", "Object")]
         # We need to patch the Query class's get_labelgraph method
         original_query_init = Query.__init__
         original_get_labelgraph = Query.get_labelgraph
         def mock_query_init(self, *args, **kwargs):
             original_query_init(self, *args, **kwargs)
         async def mock_get_labelgraph(self, query_text):
-            return test_labelgraph
+            return test_labelgraph, test_uri_map
         Query.__init__ = mock_query_init
         Query.get_labelgraph = mock_get_labelgraph
+        # Collect provenance emitted via callback
+        provenance_events = []
+        async def collect_provenance(triples, prov_id):
+            provenance_events.append((triples, prov_id))
         try:
-            # Call GraphRag.query
-            result = await graph_rag.query(
+            # Call GraphRag.query with provenance callback
+            response = await graph_rag.query(
                 query="test query",
                 user="test_user",
                 collection="test_collection",
                 entity_limit=25,
-                triple_limit=15
+                triple_limit=15,
+                explain_callback=collect_provenance
             )
-            # Verify prompt client was called with knowledge graph and query
-            mock_prompt_client.kg_prompt.assert_called_once_with("test query", test_labelgraph)
-            # Verify result
-            assert result == expected_response
+            # Verify response text
+            assert response == expected_response
+            # Verify provenance was emitted incrementally (4 events: session, retrieval, selection, answer)
+            assert len(provenance_events) == 4
+            # Verify each event has triples and a URN
+            for triples, prov_id in provenance_events:
+                assert isinstance(triples, list)
+                assert len(triples) > 0
+                assert prov_id.startswith("urn:trustgraph:")
+            # Verify order: session, retrieval, selection, answer
+            assert "session" in provenance_events[0][1]
+            assert "retrieval" in provenance_events[1][1]
+            assert "selection" in provenance_events[2][1]
+            assert "answer" in provenance_events[3][1]
         finally:
            # Restore original methods
            Query.__init__ = original_query_init