* Rename all arch references to plano across the codebase
Complete rebrand from "Arch"/"archgw" to "Plano" including:
- Config files: arch_config_schema.yaml, workflow, demo configs
- Environment variables: ARCH_CONFIG_* → PLANO_CONFIG_*
- Python CLI: variables, functions, file paths, docker mounts
- Rust crates: config paths, log messages, metadata keys
- Docker/build: Dockerfile, supervisord, .dockerignore, .gitignore
- Docker Compose: volume mounts and env vars across all demos/tests
- GitHub workflows: job/step names
- Shell scripts: log messages
- Demos: Python code, READMEs, VS Code configs, Grafana dashboard
- Docs: RST includes, code comments, config references
- Package metadata: package.json, pyproject.toml, uv.lock
External URLs (docs.archgw.com, github.com/katanemo/archgw) left as-is.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Update remaining arch references in docs
- Rename RST cross-reference labels: arch_access_logging, arch_overview_tracing, arch_overview_threading → plano_*
- Update label references in request_lifecycle.rst
- Rename arch_config_state_storage_example.yaml → plano_config_state_storage_example.yaml
- Update config YAML comments: "Arch creates/uses" → "Plano creates/uses"
- Update "the Arch gateway" → "the Plano gateway" in configuration_reference.rst
- Update arch_config_schema.yaml reference in provider_models.py
- Rename arch_agent_router → plano_agent_router in config example
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Fix remaining arch references found in second pass
- config/docker-compose.dev.yaml: ARCH_CONFIG_FILE → PLANO_CONFIG_FILE,
arch_config.yaml → plano_config.yaml, archgw_logs → plano_logs
- config/test_passthrough.yaml: container mount path
- tests/e2e/docker-compose.yaml: source file path (was still arch_config.yaml)
- cli/planoai/core.py: comment and log message
- crates/brightstaff/src/tracing/constants.rs: doc comment
- tests/{e2e,archgw}/common.py: get_arch_messages → get_plano_messages,
arch_state/arch_messages variables renamed
- tests/{e2e,archgw}/test_prompt_gateway.py: updated imports and usages
- demos/shared/test_runner/{common,test_demos}.py: same renames
- tests/e2e/test_model_alias_routing.py: docstring
- .dockerignore: archgw_modelserver → plano_modelserver
- demos/use_cases/claude_code_router/pretty_model_resolution.sh: container name
Note: x-arch-* HTTP header values and Rust constant names intentionally
preserved for backwards compatibility with existing deployments.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
# Model Choice Newsletter Demo

This folder demonstrates a practical workflow for rapid model adoption and safe model switching using Plano. It includes both a minimal test harness and a sample proxy configuration.
## Step-by-Step Walkthrough: Adopting New Models

### Part 1 — Testing Infrastructure

Goal: Quickly evaluate candidate models for a task using a repeatable, automated harness.

#### 1. Write Test Fixtures

Create a YAML file (`evals_summarize.yaml`) with real examples for your task. Each fixture includes:

- `input`: the prompt or scenario.
- `must_include`: a list of anchor words that must appear in the output.
- `schema`: the expected output schema.
Example:

```yaml
# evals_summarize.yaml
task: summarize
fixtures:
  - id: sum-001
    input: "Thread about a billing dispute…"
    must_include: ["invoice"]
    schema: SummarizeOut
  - id: sum-002
    input: "Thread about a shipping delay…"
    must_include: ["status"]
    schema: SummarizeOut
```
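The `schema` field names the output model the harness validates against. Its definition is not shown in this README; below is a minimal sketch, assuming a Pydantic model called `SummarizeOut` with a single `summary` field (both the class shape and the field name are assumptions, not the demo's actual definition):

```python
# Hypothetical definition of SummarizeOut; the demo's real schema may differ.
from pydantic import BaseModel


class SummarizeOut(BaseModel):
    summary: str  # one-paragraph summary expected to contain the anchor words


# Usage: validate a raw model response against the schema (Pydantic v2 API).
out = SummarizeOut.model_validate_json('{"summary": "Customer disputes invoice #42."}')
print(out.summary)
```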
#### 2. Candidate Models

List the model aliases (e.g., `plano.summarize.v1`, `plano.reason.v1`) you want to test. The harness routes requests through Plano, so you don't need provider API keys in your code.
#### 3. Minimal Python Harness

See `bench.py` for a complete example. It:

- Loads fixtures.
- Sends requests to each candidate model via Plano.
- Validates output against the schema and anchor words.
- Reports success rate and latency.
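For illustration, here is a minimal sketch of such a harness. It assumes Plano exposes an OpenAI-compatible `/v1/chat/completions` endpoint at `http://localhost:12000` (the address the Troubleshooting section points to) and reduces schema validation to a JSON-parse check; `bench.py` is the authoritative version.

```python
# Sketch of a bench.py-style harness, not the demo's actual code.
# Assumes an OpenAI-compatible endpoint at localhost:12000 (see Troubleshooting).
import json
import time

import requests
import yaml

PLANO_URL = "http://localhost:12000/v1/chat/completions"
CANDIDATES = ["plano.summarize.v1"]  # aliases the proxy routes to real models


def run_fixture(model: str, fixture: dict) -> tuple[bool, float]:
    """Send one fixture to a candidate model; check anchors and JSON shape."""
    start = time.perf_counter()
    resp = requests.post(
        PLANO_URL,
        json={
            "model": model,
            "messages": [{"role": "user", "content": fixture["input"]}],
        },
        timeout=60,
    )
    latency = time.perf_counter() - start
    text = resp.json()["choices"][0]["message"]["content"]
    anchors_ok = all(w.lower() in text.lower() for w in fixture["must_include"])
    try:
        json.loads(text)  # a real harness would validate against fixture["schema"]
        schema_ok = True
    except json.JSONDecodeError:
        schema_ok = False
    return anchors_ok and schema_ok, latency


if __name__ == "__main__":
    with open("evals_summarize.yaml") as f:
        fixtures = yaml.safe_load(f)["fixtures"]
    for model in CANDIDATES:
        results = [run_fixture(model, fx) for fx in fixtures]
        passed = sum(ok for ok, _ in results)
        avg_ms = 1000 * sum(t for _, t in results) / len(results)
        pct = round(100 * passed / len(results))
        print(f"{model} Success: {passed}/{len(results)} ({pct}%), avg {avg_ms:.0f} ms")
```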
Example usage:

```sh
uv sync
python bench.py
```
Benchmarks:
- ≥90% schema-valid
- ≥80% anchors present
- Latency within SLO
- Cost within budget
### Part 2 — Network Infrastructure

Goal: Use a proxy server (Plano) to decouple your app from vendor-specific model names and centralize control.

#### Why Use a Proxy?

- Consistent API across providers
- Centralized key management
- Unified logging, metrics, and guardrails
- Intent-based model aliases (e.g., `plano.summarize.v1`)
- Safe model promotions and rollbacks
- Central governance and observability
#### Example Proxy Config

See `plano_config_with_aliases.yaml` in this folder for a sample configuration mapping aliases to provider models.
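To convey the idea only, here is a hypothetical fragment showing an alias pointed at a concrete provider model. Every key below (`llm_providers`, `model_aliases`, `target`) is illustrative, not Plano's authoritative schema; the file in this folder defines the real one.

```yaml
# Illustrative only; consult plano_config_with_aliases.yaml for the real schema.
llm_providers:
  - name: gpt-4o-mini
    provider: openai
    access_key: $OPENAI_API_KEY
    model: gpt-4o-mini

model_aliases:
  plano.summarize.v1:
    target: gpt-4o-mini   # promote or roll back by repointing the alias
```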
## How to Run This Demo

1. Install `uv` (if not already installed):

   ```sh
   curl -LsSf https://astral.sh/uv/install.sh | sh
   ```

2. Install dependencies:

   - Install all dependencies as described in the main Plano README (link)
   - Then run `uv sync`

3. Start Plano:

   ```sh
   ./run_demo.sh
   ```

4. Run the test harness:

   ```sh
   python bench.py
   ```
## Files in This Folder

- `bench.py` — Minimal Python test harness
- `evals_summarize.yaml` — Example test fixtures
- `plano_config_with_aliases.yaml` — Sample Plano proxy configuration
- `pyproject.toml` — Python project configuration
- `run_demo.sh` — Starts Plano for this demo
- `uv.lock` — Dependency lockfile
## Troubleshooting

- If you see `Success: 0/2 (0%)`, check your anchor words and prompt clarity.
- Make sure Plano is running and accessible at `http://localhost:12000/`.
- For schema validation errors, ensure your prompt instructs the model to output the correct JSON structure.
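For example, a hypothetical system message that pins the output to the shape sketched for `SummarizeOut` above (the wording and field name are assumptions):

```python
# Hypothetical system prompt; adjust the JSON shape to your actual schema.
SYSTEM_PROMPT = (
    "Respond with JSON only, matching exactly this shape: "
    '{"summary": "<one-paragraph summary containing the key terms>"}'
)
messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "Thread about a billing dispute…"},
]
```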