fix(routing): auto-migrate v0.3.0 inline routing_preferences to v0.4.0 top-level (#912)

* fix(routing): auto-migrate v0.3.0 inline routing_preferences to v0.4.0 top-level

Lift the inline routing_preferences nested under each model_provider into the
top-level routing_preferences list, merging their models[], and bump the
config version to v0.4.0 while emitting a deprecation warning. Existing
v0.3.0 demo configs (Claude Code, Codex, preference_based_routing, etc.)
keep working unchanged. The schema flags the inline shape as deprecated
but still accepts it. Docs and skills are updated to the canonical
top-level multi-model form.
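For reference, a minimal sketch of the shape change (provider and preference values are illustrative, not taken from a specific demo config):

```yaml
# v0.3.0 (deprecated): preferences inline under each provider
version: v0.3.0
model_providers:
  - model: openai/gpt-4o
    access_key: $OPENAI_API_KEY
    routing_preferences:
      - name: code understanding
        description: understand and explain existing code

# v0.4.0 (canonical): preferences lifted to the top level with models[]
version: v0.4.0
model_providers:
  - model: openai/gpt-4o
    access_key: $OPENAI_API_KEY
routing_preferences:
  - name: code understanding
    description: understand and explain existing code
    models:
      - openai/gpt-4o
```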

* test(common): bump reference config assertion to v0.4.0

The rendered reference config was bumped to v0.4.0 when its inline
routing_preferences were lifted to the top level; align the
configuration deserialization test with that change.

* fix(config_generator): bump version to v0.4.0 up front in migration

Move the v0.3.0 -> v0.4.0 version bump to the top of
migrate_inline_routing_preferences so it runs unconditionally,
including for configs that already declare top-level
routing_preferences at v0.3.0. Previously the bump only fired
when inline migration produced entries, leaving top-level v0.3.0
configs rejected by brightstaff's v0.4.0 gate. Tests updated to
cover the new behavior and to confirm we never downgrade newer
versions.

* fix(config_generator): gate routing_preferences migration on version < v0.4.0

Short-circuit the migration when the config already declares v0.4.0
or newer. Anything at v0.4.0+ is assumed to be on the canonical
top-level shape and is passed through untouched, including stray
inline preferences (which are the author's bug to fix). Only v0.3.0
and older configs are rewritten and bumped.
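Taken together, the two fixes above can be sketched as follows. This is a hypothetical reconstruction of migrate_inline_routing_preferences from the test expectations in this PR; the real implementation in planoai.config_generator may differ in details (helper names like `_parse_version` are invented here):

```python
def _parse_version(version):
    # "v0.4.0" -> (0, 4, 0)
    return tuple(int(part) for part in version.lstrip("v").split("."))


def migrate_inline_routing_preferences(config):
    # Gate: v0.4.0+ configs are assumed canonical and pass through
    # untouched, including any stray inline preferences.
    if _parse_version(config.get("version", "v0.3.0")) >= (0, 4, 0):
        return

    # Bump up front so top-level-only v0.3.0 configs are accepted too.
    config["version"] = "v0.4.0"

    merged = {}  # preference name -> top-level entry, in first-seen order
    for provider in config.get("model_providers", []):
        inline = provider.pop("routing_preferences", None)
        if not inline:
            continue
        if "*" in provider.get("model", ""):
            raise ValueError(
                "wildcard providers cannot carry inline routing_preferences"
            )
        for pref in inline:
            entry = merged.setdefault(
                pref["name"],
                {
                    "name": pref["name"],
                    "description": pref["description"],
                    "models": [],
                },
            )
            entry["models"].append(provider["model"])

    # Author-defined top-level entries win over migrated inline ones.
    existing = {e["name"] for e in config.get("routing_preferences", [])}
    lifted = [entry for name, entry in merged.items() if name not in existing]
    if lifted:
        config.setdefault("routing_preferences", []).extend(lifted)
```

The version gate and the up-front bump are the two behaviors the last two commits adjust: the bump runs unconditionally for anything below v0.4.0, and nothing at v0.4.0 or newer is ever rewritten or downgraded.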
Musa 2026-04-24 12:31:44 -07:00 committed by GitHub
parent 5a652eb666
commit 897fda2deb
12 changed files with 748 additions and 225 deletions

@@ -1,7 +1,11 @@
import json
import pytest
import yaml
from unittest import mock
from planoai.config_generator import validate_and_render_schema
from planoai.config_generator import (
    validate_and_render_schema,
    migrate_inline_routing_preferences,
)

@pytest.fixture(autouse=True)
@@ -295,32 +299,30 @@ model_providers:
"id": "duplicate_routing_preference_name",
"expected_error": "Duplicate routing preference name",
"plano_config": """
version: v0.1.0
version: v0.4.0
listeners:
egress_traffic:
address: 0.0.0.0
- name: llm
type: model
port: 12000
message_format: openai
timeout: 30s
llm_providers:
model_providers:
- model: openai/gpt-4o-mini
access_key: $OPENAI_API_KEY
default: true
- model: openai/gpt-4o
access_key: $OPENAI_API_KEY
routing_preferences:
- name: code understanding
description: understand and explain existing code snippets, functions, or libraries
- model: openai/gpt-4.1
access_key: $OPENAI_API_KEY
routing_preferences:
- name: code understanding
description: generating new code snippets, functions, or boilerplate based on user prompts or requirements
routing_preferences:
- name: code understanding
description: understand and explain existing code snippets, functions, or libraries
models:
- openai/gpt-4o
- name: code understanding
description: generating new code snippets, functions, or boilerplate based on user prompts or requirements
models:
- openai/gpt-4o-mini
tracing:
random_sampling: 100
@@ -501,3 +503,238 @@ def test_convert_legacy_llm_providers_no_prompt_gateway():
        "port": 12000,
        "timeout": "30s",
    }

def test_inline_routing_preferences_migrated_to_top_level():
    plano_config = """
version: v0.3.0
listeners:
  - type: model
    name: model_listener
    port: 12000
model_providers:
  - model: openai/gpt-4o-mini
    access_key: $OPENAI_API_KEY
    default: true
  - model: openai/gpt-4o
    access_key: $OPENAI_API_KEY
    routing_preferences:
      - name: code understanding
        description: understand and explain existing code snippets, functions, or libraries
  - model: anthropic/claude-sonnet-4-20250514
    access_key: $ANTHROPIC_API_KEY
    routing_preferences:
      - name: code generation
        description: generating new code snippets, functions, or boilerplate based on user prompts or requirements
"""
    config_yaml = yaml.safe_load(plano_config)
    migrate_inline_routing_preferences(config_yaml)
    assert config_yaml["version"] == "v0.4.0"
    for provider in config_yaml["model_providers"]:
        assert "routing_preferences" not in provider
    top_level = config_yaml["routing_preferences"]
    by_name = {entry["name"]: entry for entry in top_level}
    assert set(by_name) == {"code understanding", "code generation"}
    assert by_name["code understanding"]["models"] == ["openai/gpt-4o"]
    assert by_name["code generation"]["models"] == [
        "anthropic/claude-sonnet-4-20250514"
    ]
    assert (
        by_name["code understanding"]["description"]
        == "understand and explain existing code snippets, functions, or libraries"
    )

def test_inline_same_name_across_providers_merges_models():
    plano_config = """
version: v0.3.0
listeners:
  - type: model
    name: model_listener
    port: 12000
model_providers:
  - model: openai/gpt-4o
    access_key: $OPENAI_API_KEY
    routing_preferences:
      - name: code generation
        description: generating new code snippets, functions, or boilerplate based on user prompts or requirements
  - model: anthropic/claude-sonnet-4-20250514
    access_key: $ANTHROPIC_API_KEY
    routing_preferences:
      - name: code generation
        description: generating new code snippets, functions, or boilerplate based on user prompts or requirements
"""
    config_yaml = yaml.safe_load(plano_config)
    migrate_inline_routing_preferences(config_yaml)
    top_level = config_yaml["routing_preferences"]
    assert len(top_level) == 1
    entry = top_level[0]
    assert entry["name"] == "code generation"
    assert entry["models"] == [
        "openai/gpt-4o",
        "anthropic/claude-sonnet-4-20250514",
    ]
    assert config_yaml["version"] == "v0.4.0"

def test_existing_top_level_routing_preferences_preserved():
    plano_config = """
version: v0.4.0
listeners:
  - type: model
    name: model_listener
    port: 12000
model_providers:
  - model: openai/gpt-4o
    access_key: $OPENAI_API_KEY
  - model: anthropic/claude-sonnet-4-20250514
    access_key: $ANTHROPIC_API_KEY
routing_preferences:
  - name: code generation
    description: generating new code snippets or boilerplate
    models:
      - openai/gpt-4o
      - anthropic/claude-sonnet-4-20250514
"""
    config_yaml = yaml.safe_load(plano_config)
    before = yaml.safe_dump(config_yaml, sort_keys=True)
    migrate_inline_routing_preferences(config_yaml)
    after = yaml.safe_dump(config_yaml, sort_keys=True)
    assert before == after

def test_existing_top_level_wins_over_inline_migration():
    plano_config = """
version: v0.3.0
listeners:
  - type: model
    name: model_listener
    port: 12000
model_providers:
  - model: openai/gpt-4o
    access_key: $OPENAI_API_KEY
    routing_preferences:
      - name: code generation
        description: inline description should lose
routing_preferences:
  - name: code generation
    description: user-defined top-level description wins
    models:
      - openai/gpt-4o
"""
    config_yaml = yaml.safe_load(plano_config)
    migrate_inline_routing_preferences(config_yaml)
    top_level = config_yaml["routing_preferences"]
    assert len(top_level) == 1
    entry = top_level[0]
    assert entry["description"] == "user-defined top-level description wins"
    assert entry["models"] == ["openai/gpt-4o"]

def test_wildcard_with_inline_routing_preferences_errors():
    plano_config = """
version: v0.3.0
listeners:
  - type: model
    name: model_listener
    port: 12000
model_providers:
  - model: openrouter/*
    base_url: https://openrouter.ai/api/v1
    passthrough_auth: true
    routing_preferences:
      - name: code generation
        description: generating code
"""
    config_yaml = yaml.safe_load(plano_config)
    with pytest.raises(Exception) as excinfo:
        migrate_inline_routing_preferences(config_yaml)
    assert "wildcard" in str(excinfo.value).lower()

def test_migration_bumps_version_even_without_inline_preferences():
    plano_config = """
version: v0.3.0
listeners:
  - type: model
    name: model_listener
    port: 12000
model_providers:
  - model: openai/gpt-4o
    access_key: $OPENAI_API_KEY
"""
    config_yaml = yaml.safe_load(plano_config)
    migrate_inline_routing_preferences(config_yaml)
    assert "routing_preferences" not in config_yaml
    assert config_yaml["version"] == "v0.4.0"

def test_migration_is_noop_on_v040_config_with_stray_inline_preferences():
    # v0.4.0 configs are assumed to be on the canonical top-level shape.
    # The migration intentionally does not rescue stray inline preferences
    # at v0.4.0+ so that the deprecation boundary is a clean version gate.
    plano_config = """
version: v0.4.0
listeners:
  - type: model
    name: model_listener
    port: 12000
model_providers:
  - model: openai/gpt-4o
    access_key: $OPENAI_API_KEY
    routing_preferences:
      - name: code generation
        description: generating new code
"""
    config_yaml = yaml.safe_load(plano_config)
    migrate_inline_routing_preferences(config_yaml)
    assert config_yaml["version"] == "v0.4.0"
    assert "routing_preferences" not in config_yaml
    assert config_yaml["model_providers"][0]["routing_preferences"] == [
        {"name": "code generation", "description": "generating new code"}
    ]

def test_migration_does_not_downgrade_newer_versions():
    plano_config = """
version: v0.5.0
listeners:
  - type: model
    name: model_listener
    port: 12000
model_providers:
  - model: openai/gpt-4o
    access_key: $OPENAI_API_KEY
"""
    config_yaml = yaml.safe_load(plano_config)
    migrate_inline_routing_preferences(config_yaml)
    assert config_yaml["version"] == "v0.5.0"