Rename all arch references to plano (#745)

* Rename all arch references to plano across the codebase

Complete rebrand from "Arch"/"archgw" to "Plano" including:
- Config files: arch_config_schema.yaml, workflow, demo configs
- Environment variables: ARCH_CONFIG_* → PLANO_CONFIG_*
- Python CLI: variables, functions, file paths, docker mounts
- Rust crates: config paths, log messages, metadata keys
- Docker/build: Dockerfile, supervisord, .dockerignore, .gitignore
- Docker Compose: volume mounts and env vars across all demos/tests
- GitHub workflows: job/step names
- Shell scripts: log messages
- Demos: Python code, READMEs, VS Code configs, Grafana dashboard
- Docs: RST includes, code comments, config references
- Package metadata: package.json, pyproject.toml, uv.lock

External URLs (docs.archgw.com, github.com/katanemo/archgw) left as-is.
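A rename of this breadth is easy to audit mechanically: mask out the deliberately preserved URLs, then flag any line that still mentions "arch". A minimal sketch (the helper and regexes are illustrative, not part of the repo):

```python
import re

# Illustrative helper, not from the codebase: flags lines that still mention
# "arch" once the intentionally preserved external URLs are masked out.
KEEP = re.compile(r"docs\.archgw\.com|github\.com/katanemo/archgw")
ARCH = re.compile(r"arch", re.IGNORECASE)

def needs_rename(line: str) -> bool:
    return bool(ARCH.search(KEEP.sub("", line)))

print(needs_rename("ARCH_CONFIG_FILE=arch_config.yaml"))     # True
print(needs_rename("See https://docs.archgw.com for docs"))  # False
print(needs_rename("PLANO_CONFIG_FILE=plano_config.yaml"))   # False
```

Running such a check per line is how the "second pass" commits below caught stragglers like `tests/e2e/docker-compose.yaml`.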

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Update remaining arch references in docs

- Rename RST cross-reference labels: arch_access_logging, arch_overview_tracing, arch_overview_threading → plano_*
- Update label references in request_lifecycle.rst
- Rename arch_config_state_storage_example.yaml → plano_config_state_storage_example.yaml
- Update config YAML comments: "Arch creates/uses" → "Plano creates/uses"
- Update "the Arch gateway" → "the Plano gateway" in configuration_reference.rst
- Update arch_config_schema.yaml reference in provider_models.py
- Rename arch_agent_router → plano_agent_router in config example
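For context, a renamed RST cross-reference label and a matching call site look like this (the section body is illustrative; only the label pattern comes from the commit):

```rst
.. _plano_access_logging:

Access Logging
--------------

.. elsewhere, e.g. in request_lifecycle.rst, the reference updates in lockstep:

See :ref:`plano_access_logging` for details.
```

Label definitions and `:ref:` call sites must be renamed together, since Sphinx resolves them globally across the docs tree.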

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Fix remaining arch references found in second pass

- config/docker-compose.dev.yaml: ARCH_CONFIG_FILE → PLANO_CONFIG_FILE,
  arch_config.yaml → plano_config.yaml, archgw_logs → plano_logs
- config/test_passthrough.yaml: container mount path
- tests/e2e/docker-compose.yaml: source file path (was still arch_config.yaml)
- cli/planoai/core.py: comment and log message
- crates/brightstaff/src/tracing/constants.rs: doc comment
- tests/{e2e,archgw}/common.py: get_arch_messages → get_plano_messages,
  arch_state/arch_messages variables renamed
- tests/{e2e,archgw}/test_prompt_gateway.py: updated imports and usages
- demos/shared/test_runner/{common,test_demos}.py: same renames
- tests/e2e/test_model_alias_routing.py: docstring
- .dockerignore: archgw_modelserver → plano_modelserver
- demos/use_cases/claude_code_router/pretty_model_resolution.sh: container name

Note: x-arch-* HTTP header values and Rust constant names intentionally
preserved for backwards compatibility with existing deployments.
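Because the `x-arch-*` keys stay on the wire, client code should keep reading the legacy key. A sketch of that compatibility pattern (the header names here are assumptions, not taken from the repo):

```python
# Sketch only: the commit preserves x-arch-* header keys on the wire, so a
# client reads the legacy key first. "x-plano-state" is purely hypothetical,
# shown as the shape a future rebranded key could take.
LEGACY_STATE_KEY = "x-arch-state"  # assumed spelling of the preserved key

def read_gateway_state(metadata: dict) -> str:
    # Prefer the preserved legacy key; fall back to a hypothetical new key.
    if LEGACY_STATE_KEY in metadata:
        return metadata[LEGACY_STATE_KEY]
    return metadata.get("x-plano-state", "{}")

print(read_gateway_state({"x-arch-state": '{"messages": "[]"}'}))
```

Keeping the wire format stable means existing deployments keep working while only source-level identifiers are rebranded.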

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Commit ba651aaf71 (parent 0557f7ff98)
Adil Hafeez 2026-02-13 15:16:56 -08:00, committed by GitHub
No known key found for this signature in database; GPG key ID: B5690EEEBB952194
115 changed files with 504 additions and 505 deletions

View file

@@ -6,14 +6,14 @@ To be able to run e2e tests successfully run_e2e_script prepares environment in
 1. build and start weather_forecast demo (using docker compose)
 1. build, install and start model server async (using uv)
-1. build and start arch gateway (using docker compose)
+1. build and start Plano gateway (using docker compose)
 1. wait for model server to be ready
-1. wait for arch gateway to be ready
+1. wait for Plano gateway to be ready
 1. start e2e tests (using uv)
    1. runs llm gateway tests for llm routing
    2. runs prompt gateway tests to test function calling, parameter gathering and summarization
 2. cleanup
-   1. stops arch gateway
+   1. stops Plano gateway
    2. stops model server
    3. stops weather_forecast demo

View file

@@ -98,17 +98,17 @@ def get_data_chunks(stream, n=1):
     return chunks


-def get_arch_messages(response_json):
-    arch_messages = []
+def get_plano_messages(response_json):
+    plano_messages = []
     if response_json and "metadata" in response_json:
-        # load arch_state from metadata
-        arch_state_str = response_json.get("metadata", {}).get(ARCH_STATE_HEADER, "{}")
-        # parse arch_state into json object
-        arch_state = json.loads(arch_state_str)
-        # load messages from arch_state
-        arch_messages_str = arch_state.get("messages", "[]")
+        # load plano_state from metadata
+        plano_state_str = response_json.get("metadata", {}).get(ARCH_STATE_HEADER, "{}")
+        # parse plano_state into json object
+        plano_state = json.loads(plano_state_str)
+        # load messages from plano_state
+        plano_messages_str = plano_state.get("messages", "[]")
         # parse messages into json object
-        arch_messages = json.loads(arch_messages_str)
-        # append messages from arch gateway to history
-        return arch_messages
+        plano_messages = json.loads(plano_messages_str)
+        # append messages from plano gateway to history
+        return plano_messages
     return []
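The renamed helper can be exercised standalone. This sketch re-implements it with an assumed header constant (the real `ARCH_STATE_HEADER` value lives in `common.py` and is preserved for compatibility):

```python
import json

ARCH_STATE_HEADER = "x-arch-state"  # assumed value; the real constant is in common.py

def get_plano_messages(response_json):
    # Condensed version of the helper above: state and messages are both
    # JSON-encoded strings, nested one inside the other.
    if response_json and "metadata" in response_json:
        plano_state_str = response_json.get("metadata", {}).get(ARCH_STATE_HEADER, "{}")
        plano_state = json.loads(plano_state_str)
        return json.loads(plano_state.get("messages", "[]"))
    return []

# Build a response the way the gateway would (illustrative payload):
state = {"messages": json.dumps([
    {"role": "assistant", "tool_calls": []},
    {"role": "tool", "content": "72F"},
])}
response_json = {"metadata": {ARCH_STATE_HEADER: json.dumps(state)}}
print(len(get_plano_messages(response_json)))  # 2
```

The double JSON encoding explains the two `json.loads` calls in the diff: the metadata value is a string, and its `messages` field is itself a JSON string.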

View file

@@ -8,7 +8,7 @@ services:
       - "12000:12000"
       - "19901:9901"
     volumes:
-      - ../../demos/samples_python/weather_forecast/arch_config.yaml:/app/arch_config.yaml
+      - ../../demos/samples_python/weather_forecast/plano_config.yaml:/app/plano_config.yaml
       - /etc/ssl/cert.pem:/etc/ssl/cert.pem
     extra_hosts:
       - "host.docker.internal:host-gateway"

View file

@@ -34,7 +34,7 @@ uv sync
 uv tool install .
 cd -

-log building docker image for arch gateway
+log building docker image for plano gateway
 log ======================================
 cd ../../
 planoai build
@@ -43,7 +43,7 @@ cd -
 # Once we build plano we have to install the dependencies again to a new virtual environment.
 uv sync

-log startup arch gateway with function calling demo
+log startup plano gateway with function calling demo
 cd ../../
 planoai down
 planoai up demos/samples_python/weather_forecast/config.yaml
@@ -53,11 +53,11 @@ log running e2e tests for prompt gateway
 log ====================================
 uv run pytest test_prompt_gateway.py

-log shutting down the arch gateway service for prompt_gateway demo
+log shutting down the plano gateway service for prompt_gateway demo
 log ===============================================================
 planoai down

-log startup arch gateway with model alias routing demo
+log startup plano gateway with model alias routing demo
 cd ../../
 planoai up demos/use_cases/model_alias_routing/config_with_aliases.yaml
 cd -
@@ -70,7 +70,7 @@ log running e2e tests for openai responses api client
 log ========================================
 uv run pytest test_openai_responses_api_client.py

-log startup arch gateway with state storage for openai responses api client demo
+log startup plano gateway with state storage for openai responses api client demo
 planoai down
 planoai up config_memory_state_v1_responses.yaml

View file

@@ -34,7 +34,7 @@ cd -
 uv sync

 # Start gateway with model alias routing config
-log "startup arch gateway with model alias routing demo"
+log "startup plano gateway with model alias routing demo"
 cd ../../
 planoai down || true
 planoai up demos/use_cases/model_alias_routing/config_with_aliases.yaml

View file

@@ -39,7 +39,7 @@ docker compose up weather_forecast_service --build -d
 cd -

 # Start gateway with prompt_gateway config
-log "startup arch gateway with function calling demo"
+log "startup plano gateway with function calling demo"
 cd ../../
 planoai down || true
 planoai up demos/samples_python/weather_forecast/config.yaml

View file

@@ -33,7 +33,7 @@ cd -
 uv sync

 # Start gateway with state storage config
-log "startup arch gateway with state storage config"
+log "startup plano gateway with state storage config"
 cd ../../
 planoai down || true
 planoai up tests/e2e/config_memory_state_v1_responses.yaml

View file

@@ -260,7 +260,7 @@ def test_anthropic_client_with_alias_streaming():


 def test_400_error_handling_with_alias():
-    """Test that 400 errors from upstream are properly returned by archgw"""
+    """Test that 400 errors from upstream are properly returned by plano"""
     logger.info(
         "Testing 400 error handling with arch.summarize.v1 and invalid parameter"
     )

View file

@@ -10,7 +10,7 @@ from common import (
     PROMPT_GATEWAY_ENDPOINT,
     LLM_GATEWAY_ENDPOINT,
     PREFILL_LIST,
-    get_arch_messages,
+    get_plano_messages,
     get_data_chunks,
 )
@@ -117,11 +117,11 @@ def test_prompt_gateway(stream):
     assert len(choices) > 0
     assert "role" in choices[0]["message"]
     assert choices[0]["message"]["role"] == "assistant"
-    # now verify arch_messages (tool call and api response) that are sent as response metadata
-    arch_messages = get_arch_messages(response_json)
-    print("arch_messages: ", json.dumps(arch_messages))
-    assert len(arch_messages) == 2
-    tool_calls_message = arch_messages[0]
+    # now verify plano_messages (tool call and api response) that are sent as response metadata
+    plano_messages = get_plano_messages(response_json)
+    print("plano_messages: ", json.dumps(plano_messages))
+    assert len(plano_messages) == 2
+    tool_calls_message = plano_messages[0]
     print("tool_calls_message: ", tool_calls_message)
     tool_calls = tool_calls_message.get("content", [])
     cleaned_tool_call_str = cleanup_tool_call(tool_calls)
@@ -295,10 +295,10 @@ def test_prompt_gateway_param_tool_call(stream):
     assert len(choices) > 0
     assert "role" in choices[0]["message"]
     assert choices[0]["message"]["role"] == "assistant"
-    # now verify arch_messages (tool call and api response) that are sent as response metadata
-    arch_messages = get_arch_messages(response_json)
-    assert len(arch_messages) == 2
-    tool_calls_message = arch_messages[0]
+    # now verify plano_messages (tool call and api response) that are sent as response metadata
+    plano_messages = get_plano_messages(response_json)
+    assert len(plano_messages) == 2
+    tool_calls_message = plano_messages[0]
     tool_calls = tool_calls_message.get("tool_calls", [])
     assert len(tool_calls) > 0
     tool_call = normalize_tool_call_arguments(tool_calls[0]["function"])