feat: workspace-based multi-tenancy, replacing user as tenancy axis (#840)

Introduces `workspace` as the isolation boundary for config, flows,
library, and knowledge data. Removes `user` as a schema-level field
throughout the code, API specs, and tests; workspace provides the
same separation more cleanly via the trusted `flow.workspace` value,
set at the service layer rather than supplied in client messages.

Design
------
- IAM tech spec (docs/tech-specs/iam.md) documents current state,
  proposed auth/access model, and migration direction.
- Data ownership model (docs/tech-specs/data-ownership-model.md)
  captures the workspace/collection/flow hierarchy.

Schema + messaging
------------------
- Drop `user` field from AgentRequest/Step, GraphRagQuery,
  DocumentRagQuery, Triples/Graph/Document/Row EmbeddingsRequest,
  Sparql/Rows/Structured QueryRequest, ToolServiceRequest.
- Keep collection/workspace routing via flow.workspace at the
  service layer.
- Translators updated so they no longer serialise/deserialise `user`.
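As a sketch of what the translator change implies (the function below is illustrative, not the actual TrustGraph translator; the default values match the contract tests updated in this commit):

```python
def decode_document_embeddings_request(data: dict) -> dict:
    # 'user' is simply no longer read: a stale client that still sends it
    # has the field ignored rather than mapped onto the message.
    return {
        "vector": data["vector"],
        "limit": data.get("limit", 10),                  # default per contract tests
        "collection": data.get("collection", "default"), # default per contract tests
    }

# An old-style payload still carrying a user field decodes cleanly:
decoded = decode_document_embeddings_request(
    {"vector": [0.1, 0.2], "user": "legacy_user"}
)
assert "user" not in decoded
assert decoded == {"vector": [0.1, 0.2], "limit": 10, "collection": "default"}
```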

API specs
---------
- OpenAPI schemas and path examples cleaned of user fields.
- WebSocket AsyncAPI messages updated.
- Removed the unused parameters/User.yaml.

Services + base
---------------
- Librarian, collection manager, knowledge, config: all operations
  scoped by workspace. Config client API takes workspace as first
  positional arg.
- `flow.workspace` is set at flow start time by the infrastructure;
  it is no longer passed through from clients.
- Tool service drops user-personalisation passthrough.
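A minimal sketch of the config-client shape this implies (class and method names here are illustrative stand-ins, not the real TrustGraph API; only the "workspace as first positional arg" convention is from this commit):

```python
class ConfigClient:
    """Illustrative stand-in: every operation is scoped by workspace."""

    def __init__(self):
        self._store = {}

    def put(self, workspace: str, key: str, value: str) -> None:
        # workspace comes first, so it cannot be forgotten or spoofed
        # via an optional client-supplied field.
        self._store[(workspace, key)] = value

    def get(self, workspace: str, key: str, default=None):
        return self._store.get((workspace, key), default)

cfg = ConfigClient()
cfg.put("ws-research", "prompt.system", "You are a researcher.")
cfg.put("ws-sales", "prompt.system", "You are a sales assistant.")

# Isolation: identical keys in different workspaces never collide.
assert cfg.get("ws-research", "prompt.system") != cfg.get("ws-sales", "prompt.system")
assert cfg.get("ws-other", "prompt.system") is None
```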

CLI + SDK
---------
- Adds tg-init-workspace and workspace-aware import/export.
- All tg-* commands drop user args; accept --workspace.
- Python API/SDK (flow, socket_client, async_*, explainability,
  library) drop user kwargs from every method signature.

MCP server
----------
- All tool endpoints drop user parameters; socket_manager no longer
  keyed per user.

Flow service
------------
- Closure-based topic cleanup on flow stop: only delete topics
  whose blueprint template was parameterised AND no remaining
  live flow (across all workspaces) still resolves to that topic.
  Four scopes fall out naturally from template analysis:
    * {id} -> per-flow, deleted on stop
    * {blueprint} -> per-blueprint, kept while any flow of the
      same blueprint exists
    * {workspace} -> per-workspace, kept while any flow in the
      workspace exists
    * literal -> global, never deleted (e.g. tg.request.librarian)
  Fixes a bug where stopping a flow silently destroyed the global
  librarian exchange, wedging all library operations until manual
  restart.
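The scope rule above can be sketched as a pure function (names here are illustrative; the actual flow-service implementation may differ):

```python
def topic_scope(template: str) -> str:
    """Classify a blueprint topic template by the parameter it carries."""
    if "{id}" in template:
        return "flow"        # per-flow: safe to delete on stop
    if "{blueprint}" in template:
        return "blueprint"   # shared by all flows of one blueprint
    if "{workspace}" in template:
        return "workspace"   # shared by all flows in one workspace
    return "global"          # literal topic: never deleted

def may_delete(template: str, stopping: dict, live_flows: list) -> bool:
    """stopping / live_flows entries: {'blueprint': ..., 'workspace': ...};
    live_flows spans ALL workspaces, minus the flow being stopped."""
    scope = topic_scope(template)
    if scope == "flow":
        return True
    if scope == "blueprint":
        return all(f["blueprint"] != stopping["blueprint"] for f in live_flows)
    if scope == "workspace":
        return all(f["workspace"] != stopping["workspace"] for f in live_flows)
    return False  # e.g. tg.request.librarian: the bug this commit fixes

stopping = {"blueprint": "rag", "workspace": "ws-a"}
assert may_delete("flow.{id}.input", stopping, live_flows=[])
assert not may_delete("tg.request.librarian", stopping, live_flows=[])
# Another rag flow in a *different* workspace keeps the blueprint topic alive:
assert not may_delete("blueprint.{blueprint}.work", stopping,
                      live_flows=[{"blueprint": "rag", "workspace": "ws-b"}])
```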

RabbitMQ backend
----------------
- heartbeat=60, blocked_connection_timeout=300. Catches silently
  dead connections (broker restart, orphaned channels, network
  partitions) within ~2 heartbeat windows, so the consumer
  reconnects and re-binds its queue rather than sitting forever
  on a zombie connection.
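A back-of-envelope check on the "~2 heartbeat windows" claim (pure arithmetic; the constant names mirror pika's `ConnectionParameters` arguments, and the two-missed-heartbeats rule is RabbitMQ's documented behaviour):

```python
HEARTBEAT = 60                    # seconds between heartbeat frames
BLOCKED_CONNECTION_TIMEOUT = 300  # seconds before abandoning a broker-blocked connection

def worst_case_detection(heartbeat: int, missed_intervals: int = 2) -> int:
    # RabbitMQ declares a peer dead after ~2 consecutive missed heartbeats,
    # so a silently dead connection surfaces within this many seconds,
    # at which point the consumer reconnects and re-binds its queue.
    return heartbeat * missed_intervals

assert worst_case_detection(HEARTBEAT) == 120  # about 2 minutes, not "forever"
```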

Tests
-----
- Full test refresh: unit, integration, contract, provenance.
- Dropped user-field assertions and constructor kwargs across
  ~100 test files.
- Renamed user-collection isolation tests to workspace-collection.

Commit metadata
---------------
- cybermaggedon, 2026-04-21 23:23:01 +01:00 (committed by GitHub)
- Parent 9332089b3d; commit d35473f7f7
- GPG key ID B5690EEEBB952194 (no known key found for this signature
  in database)
- 377 changed files with 6868 additions and 5785 deletions

@@ -72,7 +72,6 @@ def sample_message_data():
},
"DocumentRagQuery": {
"query": "What is artificial intelligence?",
"user": "test_user",
"collection": "test_collection",
"doc_limit": 10
},
@@ -95,7 +94,6 @@ def sample_message_data():
},
"Metadata": {
"id": "test-doc-123",
"user": "test_user",
"collection": "test_collection"
},
"Term": {
@@ -130,9 +128,8 @@ def invalid_message_data():
{}, # Missing required fields
],
"DocumentRagQuery": [
{"query": None, "user": "test", "collection": "test", "doc_limit": 10}, # Invalid query
{"query": "test", "user": None, "collection": "test", "doc_limit": 10}, # Invalid user
{"query": "test", "user": "test", "collection": "test", "doc_limit": -1}, # Invalid doc_limit
{"query": None, "collection": "test", "doc_limit": 10}, # Invalid query
{"query": "test", "collection": "test", "doc_limit": -1}, # Invalid doc_limit
{"query": "test"}, # Missing required fields
],
"Term": [

@@ -18,24 +18,18 @@ class TestDocumentEmbeddingsRequestContract:
def test_request_schema_fields(self):
"""Test that DocumentEmbeddingsRequest has expected fields"""
# Create a request
request = DocumentEmbeddingsRequest(
vector=[0.1, 0.2, 0.3],
limit=10,
user="test_user",
collection="test_collection"
)
# Verify all expected fields exist
assert hasattr(request, 'vector')
assert hasattr(request, 'limit')
assert hasattr(request, 'user')
assert hasattr(request, 'collection')
# Verify field values
assert request.vector == [0.1, 0.2, 0.3]
assert request.limit == 10
assert request.user == "test_user"
assert request.collection == "test_collection"
def test_request_translator_decode(self):
@@ -45,7 +39,6 @@ class TestDocumentEmbeddingsRequestContract:
data = {
"vector": [0.1, 0.2, 0.3, 0.4],
"limit": 5,
"user": "custom_user",
"collection": "custom_collection"
}
@@ -54,7 +47,6 @@ class TestDocumentEmbeddingsRequestContract:
assert isinstance(result, DocumentEmbeddingsRequest)
assert result.vector == [0.1, 0.2, 0.3, 0.4]
assert result.limit == 5
assert result.user == "custom_user"
assert result.collection == "custom_collection"
def test_request_translator_decode_with_defaults(self):
@@ -63,7 +55,7 @@ class TestDocumentEmbeddingsRequestContract:
data = {
"vector": [0.1, 0.2]
# No limit, user, or collection provided
# No limit or collection provided
}
result = translator.decode(data)
@@ -71,7 +63,6 @@ class TestDocumentEmbeddingsRequestContract:
assert isinstance(result, DocumentEmbeddingsRequest)
assert result.vector == [0.1, 0.2]
assert result.limit == 10 # Default
assert result.user == "trustgraph" # Default
assert result.collection == "default" # Default
def test_request_translator_encode(self):
@@ -81,7 +72,6 @@ class TestDocumentEmbeddingsRequestContract:
request = DocumentEmbeddingsRequest(
vector=[0.5, 0.6],
limit=20,
user="test_user",
collection="test_collection"
)
@@ -90,7 +80,6 @@ class TestDocumentEmbeddingsRequestContract:
assert isinstance(result, dict)
assert result["vector"] == [0.5, 0.6]
assert result["limit"] == 20
assert result["user"] == "test_user"
assert result["collection"] == "test_collection"
@@ -219,7 +208,6 @@ class TestDocumentEmbeddingsMessageCompatibility:
request_data = {
"vector": [0.1, 0.2, 0.3],
"limit": 5,
"user": "test_user",
"collection": "test_collection"
}

@@ -132,7 +132,6 @@ class TestDocumentRagMessageContracts:
# Test required fields
query = DocumentRagQuery(**query_data)
assert hasattr(query, 'query')
assert hasattr(query, 'user')
assert hasattr(query, 'collection')
assert hasattr(query, 'doc_limit')
@@ -154,12 +153,10 @@ class TestDocumentRagMessageContracts:
# Test valid query
valid_query = DocumentRagQuery(
query="What is AI?",
user="test_user",
collection="test_collection",
doc_limit=5
)
assert valid_query.query == "What is AI?"
assert valid_query.user == "test_user"
assert valid_query.collection == "test_collection"
assert valid_query.doc_limit == 5
@@ -400,7 +397,6 @@ class TestMetadataMessageContracts:
metadata = Metadata(**metadata_data)
assert metadata.id == "test-doc-123"
assert metadata.user == "test_user"
assert metadata.collection == "test_collection"
def test_error_schema_contract(self):
@@ -491,7 +487,7 @@ class TestSchemaEvolutionContracts:
required_fields = {
"TextCompletionRequest": ["system", "prompt"],
"TextCompletionResponse": ["error", "response", "model"],
"DocumentRagQuery": ["query", "user", "collection"],
"DocumentRagQuery": ["query", "collection"],
"DocumentRagResponse": ["error", "response"],
"AgentRequest": ["question", "history"],
"AgentResponse": ["error"],

@@ -18,7 +18,6 @@ class TestOrchestrationFieldContracts:
def test_agent_request_orchestration_fields_roundtrip(self):
req = AgentRequest(
question="Test question",
user="testuser",
collection="default",
correlation_id="corr-123",
parent_session_id="parent-sess",
@@ -42,7 +41,6 @@ class TestOrchestrationFieldContracts:
def test_agent_request_orchestration_fields_default_empty(self):
req = AgentRequest(
question="Test question",
user="testuser",
)
assert req.correlation_id == ""
@@ -82,7 +80,6 @@ class TestSubagentCompletionStepContract:
)
req = AgentRequest(
question="goal",
user="testuser",
correlation_id="corr-123",
history=[step],
)
@@ -126,7 +123,6 @@ class TestSynthesisStepContract:
req = AgentRequest(
question="Original question",
user="testuser",
pattern="supervisor",
correlation_id="",
session_id="parent-sess",

@@ -22,7 +22,6 @@ class TestRowsCassandraContracts:
# Create test object with all required fields
test_metadata = Metadata(
id="test-doc-001",
user="test_user",
collection="test_collection",
)
@@ -47,7 +46,6 @@ class TestRowsCassandraContracts:
# Verify metadata structure
assert hasattr(test_object.metadata, 'id')
assert hasattr(test_object.metadata, 'user')
assert hasattr(test_object.metadata, 'collection')
# Verify types
@@ -150,7 +148,6 @@ class TestRowsCassandraContracts:
original = ExtractedObject(
metadata=Metadata(
id="serial-001",
user="test_user",
collection="test_coll",
),
schema_name="test_schema",
@@ -168,7 +165,6 @@ class TestRowsCassandraContracts:
# Verify round-trip
assert decoded.metadata.id == original.metadata.id
assert decoded.metadata.user == original.metadata.user
assert decoded.metadata.collection == original.metadata.collection
assert decoded.schema_name == original.schema_name
assert decoded.values == original.values
@@ -228,8 +224,7 @@ class TestRowsCassandraContracts:
# Create test object
test_obj = ExtractedObject(
metadata=Metadata(
id="meta-001",
user="user123", # -> keyspace
id="meta-001", # -> keyspace
collection="coll456", # -> partition key
),
schema_name="table789", # -> table name
@@ -242,7 +237,6 @@ class TestRowsCassandraContracts:
# - metadata.user -> Cassandra keyspace
# - schema_name -> Cassandra table
# - metadata.collection -> Part of primary key
assert test_obj.metadata.user # Required for keyspace
assert test_obj.schema_name # Required for table
assert test_obj.metadata.collection # Required for partition key
@@ -256,7 +250,6 @@ class TestRowsCassandraContractsBatch:
# Create test object with multiple values in batch
test_metadata = Metadata(
id="batch-doc-001",
user="test_user",
collection="test_collection",
)
@@ -302,7 +295,6 @@ class TestRowsCassandraContractsBatch:
"""Test empty batch ExtractedObject contract"""
test_metadata = Metadata(
id="empty-batch-001",
user="test_user",
collection="test_collection",
)
@@ -324,7 +316,6 @@ class TestRowsCassandraContractsBatch:
"""Test single-item batch (backward compatibility) contract"""
test_metadata = Metadata(
id="single-batch-001",
user="test_user",
collection="test_collection",
)
@@ -353,7 +344,6 @@ class TestRowsCassandraContractsBatch:
original = ExtractedObject(
metadata=Metadata(
id="batch-serial-001",
user="test_user",
collection="test_coll",
),
schema_name="test_schema",
@@ -375,7 +365,6 @@ class TestRowsCassandraContractsBatch:
# Verify round-trip for batch
assert decoded.metadata.id == original.metadata.id
assert decoded.metadata.user == original.metadata.user
assert decoded.metadata.collection == original.metadata.collection
assert decoded.schema_name == original.schema_name
assert len(decoded.values) == len(original.values)
@@ -425,8 +414,7 @@ class TestRowsCassandraContractsBatch:
# 3. Be stored in the same keyspace (user)
test_metadata = Metadata(
id="partition-test-001",
user="consistent_user", # Same keyspace
id="partition-test-001", # Same keyspace
collection="consistent_collection", # Same partition
)
@@ -443,7 +431,6 @@ class TestRowsCassandraContractsBatch:
)
# Verify consistency contract
assert batch_object.metadata.user # Must have user for keyspace
assert batch_object.metadata.collection # Must have collection for partition key
# Verify unique primary keys in batch

@@ -21,29 +21,25 @@ class TestRowsGraphQLQueryContracts:
"""Test RowsQueryRequest schema structure and required fields"""
# Create test request with all required fields
test_request = RowsQueryRequest(
user="test_user",
collection="test_collection",
query='{ customers { id name email } }',
variables={"status": "active", "limit": "10"},
operation_name="GetCustomers"
)
# Verify all required fields are present
assert hasattr(test_request, 'user')
assert hasattr(test_request, 'collection')
assert hasattr(test_request, 'query')
assert hasattr(test_request, 'variables')
assert hasattr(test_request, 'operation_name')
# Verify field types
assert isinstance(test_request.user, str)
assert isinstance(test_request.collection, str)
assert isinstance(test_request.query, str)
assert isinstance(test_request.variables, dict)
assert isinstance(test_request.operation_name, str)
# Verify content
assert test_request.user == "test_user"
assert test_request.collection == "test_collection"
assert "customers" in test_request.query
assert test_request.variables["status"] == "active"
@@ -53,15 +49,13 @@ class TestRowsGraphQLQueryContracts:
"""Test RowsQueryRequest with minimal required fields"""
# Create request with only essential fields
minimal_request = RowsQueryRequest(
user="user",
collection="collection",
query='{ test }',
variables={},
operation_name=""
)
# Verify minimal request is valid
assert minimal_request.user == "user"
assert minimal_request.collection == "collection"
assert minimal_request.query == '{ test }'
assert minimal_request.variables == {}
@@ -187,22 +181,20 @@ class TestRowsGraphQLQueryContracts:
"""Test that request/response can be serialized/deserialized correctly"""
# Create original request
original_request = RowsQueryRequest(
user="serialization_test",
collection="test_data",
query='{ orders(limit: 5) { id total customer { name } } }',
variables={"limit": "5", "status": "active"},
operation_name="GetRecentOrders"
)
# Test request serialization using Pulsar schema
request_schema = AvroSchema(RowsQueryRequest)
# Encode and decode request
encoded_request = request_schema.encode(original_request)
decoded_request = request_schema.decode(encoded_request)
# Verify request round-trip
assert decoded_request.user == original_request.user
assert decoded_request.collection == original_request.collection
assert decoded_request.query == original_request.query
assert decoded_request.variables == original_request.variables
@@ -245,7 +237,7 @@ class TestRowsGraphQLQueryContracts:
"""Test supported GraphQL query formats"""
# Test basic query
basic_query = RowsQueryRequest(
user="test", collection="test", query='{ customers { id } }',
collection="test", query='{ customers { id } }',
variables={}, operation_name=""
)
assert "customers" in basic_query.query
@@ -254,7 +246,7 @@ class TestRowsGraphQLQueryContracts:
# Test query with variables
parameterized_query = RowsQueryRequest(
user="test", collection="test",
collection="test",
query='query GetCustomers($status: String, $limit: Int) { customers(status: $status, limit: $limit) { id name } }',
variables={"status": "active", "limit": "10"},
operation_name="GetCustomers"
@@ -266,7 +258,7 @@ class TestRowsGraphQLQueryContracts:
# Test complex nested query
nested_query = RowsQueryRequest(
user="test", collection="test",
collection="test",
query='''
{
customers(limit: 10) {
@@ -297,7 +289,7 @@ class TestRowsGraphQLQueryContracts:
# This test verifies the current contract, though ideally we'd support all JSON types
variables_test = RowsQueryRequest(
user="test", collection="test", query='{ test }',
collection="test", query='{ test }',
variables={
"string_var": "test_value",
"numeric_var": "123", # Numbers as strings due to Map(String()) limitation
@@ -318,22 +310,18 @@ class TestRowsGraphQLQueryContracts:
def test_cassandra_context_fields_contract(self):
"""Test that request contains necessary fields for Cassandra operations"""
# Verify request has fields needed for Cassandra keyspace/table targeting
# Verify request has fields needed for partition key targeting
request = RowsQueryRequest(
user="keyspace_name", # Maps to Cassandra keyspace
collection="partition_collection", # Used in partition key
query='{ objects { id } }',
variables={}, operation_name=""
)
# These fields are required for proper Cassandra operations
assert request.user # Required for keyspace identification
assert request.collection # Required for partition key
# Required for partition key
assert request.collection
# Verify field naming follows TrustGraph patterns (matching other query services)
# This matches TriplesQueryRequest, DocumentEmbeddingsRequest patterns
assert hasattr(request, 'user') # Same as TriplesQueryRequest.user
assert hasattr(request, 'collection') # Same as TriplesQueryRequest.collection
assert hasattr(request, 'collection')
def test_graphql_extensions_contract(self):
"""Test GraphQL extensions field format and usage"""
@@ -405,7 +393,7 @@ class TestRowsGraphQLQueryContracts:
# Request to execute specific operation
multi_op_request = RowsQueryRequest(
user="test", collection="test",
collection="test",
query=multi_op_query,
variables={},
operation_name="GetCustomers"
@@ -418,7 +406,7 @@ class TestRowsGraphQLQueryContracts:
# Test single operation (operation_name optional)
single_op_request = RowsQueryRequest(
user="test", collection="test",
collection="test",
query='{ customers { id } }',
variables={}, operation_name=""
)

@@ -41,10 +41,11 @@ class TestSchemaFieldContracts:
def test_metadata_fields(self):
# NOTE: there is no `metadata` field. A previous regression
# constructed Metadata(metadata=...) and crashed at runtime.
# `user` was also dropped in the workspace refactor — workspace
# now flows via flow.workspace, not via message payload.
assert _field_names(Metadata) == {
"id",
"root",
"user",
"collection",
}

@@ -93,7 +93,6 @@ class TestStructuredDataSchemaContracts:
# Arrange
metadata = Metadata(
id="structured-data-001",
user="test_user",
collection="test_collection",
)
@@ -118,7 +117,6 @@ class TestStructuredDataSchemaContracts:
# Arrange
metadata = Metadata(
id="extracted-obj-001",
user="test_user",
collection="test_collection",
)
@@ -143,7 +141,6 @@ class TestStructuredDataSchemaContracts:
# Arrange
metadata = Metadata(
id="extracted-batch-001",
user="test_user",
collection="test_collection",
)
@@ -177,7 +174,6 @@ class TestStructuredDataSchemaContracts:
# Arrange
metadata = Metadata(
id="extracted-empty-001",
user="test_user",
collection="test_collection",
)
@@ -277,7 +273,6 @@ class TestStructuredEmbeddingsContracts:
# Arrange
metadata = Metadata(
id="struct-embed-001",
user="test_user",
collection="test_collection",
)
@@ -308,7 +303,7 @@ class TestStructuredDataSerializationContracts:
def test_structured_data_submission_serialization(self):
"""Test StructuredDataSubmission serialization contract"""
# Arrange
metadata = Metadata(id="test", user="user", collection="col")
metadata = Metadata(id="test", collection="col")
submission_data = {
"metadata": metadata,
"format": "json",
@@ -323,7 +318,7 @@ class TestStructuredDataSerializationContracts:
def test_extracted_object_serialization(self):
"""Test ExtractedObject serialization contract"""
# Arrange
metadata = Metadata(id="test", user="user", collection="col")
metadata = Metadata(id="test", collection="col")
object_data = {
"metadata": metadata,
"schema_name": "test_schema",
@@ -373,7 +368,7 @@ class TestStructuredDataSerializationContracts:
def test_extracted_object_batch_serialization(self):
"""Test ExtractedObject batch serialization contract"""
# Arrange
metadata = Metadata(id="test", user="user", collection="col")
metadata = Metadata(id="test", collection="col")
batch_object_data = {
"metadata": metadata,
"schema_name": "test_schema",
@@ -392,7 +387,7 @@ class TestStructuredDataSerializationContracts:
def test_extracted_object_empty_batch_serialization(self):
"""Test ExtractedObject empty batch serialization contract"""
# Arrange
metadata = Metadata(id="test", user="user", collection="col")
metadata = Metadata(id="test", collection="col")
empty_batch_data = {
"metadata": metadata,
"schema_name": "test_schema",