plano/demos/use_cases/model_choice_with_test_harness
Adil Hafeez ba651aaf71
Rename all arch references to plano (#745)
* Rename all arch references to plano across the codebase

Complete rebrand from "Arch"/"archgw" to "Plano" including:
- Config files: arch_config_schema.yaml, workflow, demo configs
- Environment variables: ARCH_CONFIG_* → PLANO_CONFIG_*
- Python CLI: variables, functions, file paths, docker mounts
- Rust crates: config paths, log messages, metadata keys
- Docker/build: Dockerfile, supervisord, .dockerignore, .gitignore
- Docker Compose: volume mounts and env vars across all demos/tests
- GitHub workflows: job/step names
- Shell scripts: log messages
- Demos: Python code, READMEs, VS Code configs, Grafana dashboard
- Docs: RST includes, code comments, config references
- Package metadata: package.json, pyproject.toml, uv.lock

External URLs (docs.archgw.com, github.com/katanemo/archgw) left as-is.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Update remaining arch references in docs

- Rename RST cross-reference labels: arch_access_logging, arch_overview_tracing, arch_overview_threading → plano_*
- Update label references in request_lifecycle.rst
- Rename arch_config_state_storage_example.yaml → plano_config_state_storage_example.yaml
- Update config YAML comments: "Arch creates/uses" → "Plano creates/uses"
- Update "the Arch gateway" → "the Plano gateway" in configuration_reference.rst
- Update arch_config_schema.yaml reference in provider_models.py
- Rename arch_agent_router → plano_agent_router in config example

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Fix remaining arch references found in second pass

- config/docker-compose.dev.yaml: ARCH_CONFIG_FILE → PLANO_CONFIG_FILE,
  arch_config.yaml → plano_config.yaml, archgw_logs → plano_logs
- config/test_passthrough.yaml: container mount path
- tests/e2e/docker-compose.yaml: source file path (was still arch_config.yaml)
- cli/planoai/core.py: comment and log message
- crates/brightstaff/src/tracing/constants.rs: doc comment
- tests/{e2e,archgw}/common.py: get_arch_messages → get_plano_messages,
  arch_state/arch_messages variables renamed
- tests/{e2e,archgw}/test_prompt_gateway.py: updated imports and usages
- demos/shared/test_runner/{common,test_demos}.py: same renames
- tests/e2e/test_model_alias_routing.py: docstring
- .dockerignore: archgw_modelserver → plano_modelserver
- demos/use_cases/claude_code_router/pretty_model_resolution.sh: container name

Note: x-arch-* HTTP header values and Rust constant names intentionally
preserved for backwards compatibility with existing deployments.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-13 15:16:56 -08:00
bench.py Rename all arch references to plano (#745) 2026-02-13 15:16:56 -08:00
evals_summarize.yaml adding code snippets in a single place for newsletter (#569) 2025-09-17 01:06:06 -07:00
plano_config_with_aliases.yaml Rename all arch references to plano (#745) 2026-02-13 15:16:56 -08:00
pyproject.toml Rename all arch references to plano (#745) 2026-02-13 15:16:56 -08:00
README.md Rename all arch references to plano (#745) 2026-02-13 15:16:56 -08:00
run_demo.sh Rename all arch references to plano (#745) 2026-02-13 15:16:56 -08:00
uv.lock release 0.4.2 (#679) 2026-01-07 13:02:06 -08:00

Model Choice Newsletter Demo

This folder demonstrates a practical workflow for rapid model adoption and safe model switching using Plano. It includes both a minimal test harness and a sample proxy configuration.


Step-by-Step Walkthrough: Adopting New Models

Part 1 — Testing Infrastructure

Goal: Quickly evaluate candidate models for a task using a repeatable, automated harness.

1. Write Test Fixtures

Create a YAML file (evals_summarize.yaml) with real examples for your task. Each fixture includes:

  • input: The prompt or scenario.
  • must_include: List of anchor words that must appear in the output.
  • schema: The expected output schema.

Example:

# evals_summarize.yaml
task: summarize
fixtures:
  - id: sum-001
    input: "Thread about a billing dispute…"
    must_include: ["invoice"]
    schema: SummarizeOut
  - id: sum-002
    input: "Thread about a shipping delay…"
    must_include: ["status"]
    schema: SummarizeOut
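
The actual validation lives in bench.py; below is a minimal sketch of how a single fixture's schema and must_include checks might work. The expected field name is a hypothetical stand-in for the SummarizeOut shape:

```python
# Sketch of per-fixture validation: parse the model output as JSON,
# check its shape, and check the anchor words.
# EXPECTED_FIELDS is a hypothetical stand-in for SummarizeOut.
import json

EXPECTED_FIELDS = {"summary": str}


def check_fixture(raw_output: str, must_include: list[str]) -> dict:
    """Validate one model response against a fixture's schema and anchors."""
    try:
        parsed = json.loads(raw_output)
        schema_ok = isinstance(parsed, dict) and all(
            isinstance(parsed.get(name), typ)
            for name, typ in EXPECTED_FIELDS.items()
        )
    except json.JSONDecodeError:
        schema_ok = False
    text = raw_output.lower()
    anchors_ok = all(word.lower() in text for word in must_include)
    return {"schema_valid": schema_ok, "anchors_present": anchors_ok}


result = check_fixture('{"summary": "Customer disputes invoice #42."}', ["invoice"])
```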

2. Candidate Models

List the model aliases (e.g., arch.summarize.v1, arch.reason.v1) you want to test. The harness will route requests through Plano, so you don't need provider API keys in your code.

3. Minimal Python Harness

See bench.py for a complete example. It:

  • Loads fixtures.
  • Sends requests to each candidate model via plano.
  • Validates output against schema and anchor words.
  • Reports success rate and latency.
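
The request loop itself is in bench.py; a minimal sketch of one round trip, assuming Plano exposes an OpenAI-compatible chat endpoint at http://localhost:12000 (the address used under Troubleshooting) and that the alias names match your config:

```python
# One timed request through the gateway (sketch; bench.py is the real thing).
# Assumes an OpenAI-compatible /v1/chat/completions endpoint on localhost:12000.
import json
import time
import urllib.request

PLANO_URL = "http://localhost:12000/v1/chat/completions"


def build_payload(model_alias: str, prompt: str) -> dict:
    """OpenAI-style chat payload; Plano routes on the model alias, not a vendor name."""
    return {
        "model": model_alias,
        "messages": [{"role": "user", "content": prompt}],
    }


def run_one(model_alias: str, prompt: str) -> tuple[str, float]:
    """Send one fixture through the gateway and measure wall-clock latency."""
    req = urllib.request.Request(
        PLANO_URL,
        data=json.dumps(build_payload(model_alias, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    start = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    latency = time.perf_counter() - start
    return body["choices"][0]["message"]["content"], latency
```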

Example usage:

uv sync
python bench.py

Benchmarks:

  • ≥90% schema-valid
  • ≥80% anchors present
  • Latency within SLO
  • Cost within budget
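
One way these thresholds might be turned into a promotion gate (a sketch: the rates and SLO come from the list above, but the worst-case latency check is a simplification — a real harness might use p95, and the cost check is omitted here):

```python
# Sketch of a pass/fail gate over per-fixture results, using the thresholds above.
def meets_bar(results: list[dict], latencies: list[float], slo_s: float = 2.0) -> bool:
    """Promote a candidate model only if it clears every benchmark."""
    n = len(results)
    schema_rate = sum(r["schema_valid"] for r in results) / n
    anchor_rate = sum(r["anchors_present"] for r in results) / n
    return (
        schema_rate >= 0.90          # >=90% schema-valid
        and anchor_rate >= 0.80      # >=80% anchors present
        and max(latencies) <= slo_s  # latency within SLO (worst case, simplified)
    )
```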

Part 2 — Network Infrastructure

Goal: Use a proxy server (Plano) to decouple your app from vendor-specific model names and centralize control.

Why Use a Proxy?

  • Consistent API across providers
  • Centralized key management
  • Unified logging, metrics, and guardrails
  • Intent-based model aliases (e.g., arch.summarize.v1)
  • Safe model promotions and rollbacks
  • Central governance and observability

Example Proxy Config

See plano_config_with_aliases.yaml for a sample configuration mapping aliases to provider models.


How to Run This Demo

  1. Install uv (if not already installed):

    curl -LsSf https://astral.sh/uv/install.sh | sh

  2. Install dependencies:

  • Install all dependencies as described in the main Plano README (link)
  • Then run:

    uv sync

  3. Start Plano:

    run_demo.sh

  4. Run the test harness:

    python bench.py

Files in This Folder

  • bench.py — Minimal Python test harness
  • evals_summarize.yaml — Example test fixtures
  • pyproject.toml — Python project configuration
  • plano_config_with_aliases.yaml — Sample Plano config mapping aliases to provider models

Troubleshooting

  • If you see Success: 0/2 (0%), check your anchor words and prompt clarity.
  • Make sure plano is running and accessible at http://localhost:12000/.
  • For schema validation errors, ensure your prompt instructs the model to output the correct JSON structure.
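
On that last point, spelling the expected JSON shape out in the prompt tends to cut schema failures sharply. A sketch (the field name is hypothetical — use whatever your schema actually expects):

```python
# Pin the output shape in the prompt itself; "summary" is an illustrative field.
def summarize_prompt(thread: str) -> str:
    """Wrap a support thread in instructions that demand a fixed JSON shape."""
    return (
        "Summarize the support thread below. Respond with ONLY a JSON object "
        'of the form {"summary": "<one sentence>"} and no other text.\n\n'
        + thread
    )
```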