merge main into plano-session_pinning

Adil Hafeez 2026-04-02 22:58:47 -07:00
commit f699cfb059
86 changed files with 11996 additions and 8063 deletions

View file

@ -18,37 +18,40 @@ Plano is an AI-native proxy and data plane for agentic apps — with built-in or
## How Routing Works
Routing is configured in top-level `routing_preferences` (requires `version: v0.4.0`):
```yaml
version: v0.4.0

routing_preferences:
  - name: complex_reasoning
    description: complex reasoning tasks, multi-step analysis, or detailed explanations
    models:
      - openai/gpt-4o
      - openai/gpt-4o-mini
  - name: code_generation
    description: generating new code, writing functions, or creating boilerplate
    models:
      - anthropic/claude-sonnet-4-20250514
      - openai/gpt-4o
```
When a request arrives, Plano:

1. Sends the conversation + route descriptions to Arch-Router for intent classification
2. Looks up the matched route and returns its candidate models
3. Returns an ordered list — client uses `models[0]`, falls back to `models[1]` on 429/5xx
```
1. Request arrives → "Write binary search in Python"
2. Arch-Router classifies → route: "code_generation"
3. Response → models: ["anthropic/claude-sonnet-4-20250514", "openai/gpt-4o"]
```
No match? Arch-Router returns a `null` route → the client falls back to the model in the original request.
The `/routing/v1/*` endpoints return the routing decision **without** forwarding to the LLM — useful for testing routing behavior before going to production.
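For example, a dry-run check against a local instance takes only a few lines. This is a sketch, assuming Plano listens on port 12000 (per the demo config below) and that `requests` is installed; the payload shape mirrors the curl examples in this demo:

```python
# Sketch: ask Plano for a routing decision without invoking any LLM.
import requests

decision = requests.post(
    "http://localhost:12000/routing/v1/chat/completions",
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Write binary search in Python"}],
    },
).json()

# Per the demo output, the decision carries the matched route and the ranked
# candidate list, e.g. route="code_generation", models=["anthropic/...", ...]
print(decision["route"], decision["models"])
```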
## Setup
@ -60,9 +63,9 @@ export ANTHROPIC_API_KEY=<your-key>
```
Start Plano:
```bash
planoai up demos/llm_routing/model_routing_service/config.yaml
```
## Run the demo
@ -95,13 +98,13 @@ curl http://localhost:12000/routing/v1/chat/completions \
Response:
```json
{
  "models": ["anthropic/claude-sonnet-4-20250514", "openai/gpt-4o"],
  "route": "code_generation",
  "trace_id": "c16d1096c1af4a17abb48fb182918a88"
}
```
The response contains the model list — your client should try `models[0]` first and fall back to `models[1]` on 429 or 5xx errors.
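A minimal sketch of that fallback loop, assuming the proxy also exposes an OpenAI-compatible `/v1/chat/completions` path (only the `/routing/v1/*` endpoints appear in this demo, so treat the path as illustrative):

```python
# Sketch: walk the ranked models list, skipping a model on 429 or any 5xx.
import requests

def complete_with_fallback(models, messages, base="http://localhost:12000"):
    for model in models:  # models[0] first, then models[1], ...
        resp = requests.post(
            f"{base}/v1/chat/completions",  # assumed OpenAI-compatible path
            json={"model": model, "messages": messages},
        )
        if resp.status_code == 429 or resp.status_code >= 500:
            continue  # rate-limited or upstream error: try the next candidate
        resp.raise_for_status()
        return resp.json()
    raise RuntimeError("all candidate models failed")

# usage: complete_with_fallback(routing_response["models"], request_messages)
```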
## Session Pinning
@ -176,7 +179,6 @@ GPU nodes commonly have a `nvidia.com/gpu:NoSchedule` taint — `vllm-deployment
**1. Deploy Arch-Router and Plano:**
```bash
# arch-router deployment
kubectl apply -f vllm-deployment.yaml
@ -223,6 +225,7 @@ kubectl create configmap plano-config \
kubectl rollout restart deployment/plano
```
## Demo Output
```
@ -230,35 +233,35 @@ kubectl rollout restart deployment/plano
--- 1. Code generation query (OpenAI format) ---
{
  "models": ["anthropic/claude-sonnet-4-20250514", "openai/gpt-4o"],
  "route": "code_generation",
  "trace_id": "c16d1096c1af4a17abb48fb182918a88"
}

--- 2. Complex reasoning query (OpenAI format) ---
{
  "models": ["openai/gpt-4o", "openai/gpt-4o-mini"],
  "route": "complex_reasoning",
  "trace_id": "30795e228aff4d7696f082ed01b75ad4"
}

--- 3. Simple query - no routing match (OpenAI format) ---
{
  "models": ["none"],
  "route": null,
  "trace_id": "ae0b6c3b220d499fb5298ac63f4eac0e"
}

--- 4. Code generation query (Anthropic format) ---
{
  "models": ["anthropic/claude-sonnet-4-20250514", "openai/gpt-4o"],
  "route": "code_generation",
  "trace_id": "26be822bbdf14a3ba19fe198e55ea4a9"
}

--- 7. Session pinning - first call (fresh routing decision) ---
{
  "models": ["anthropic/claude-sonnet-4-20250514", "openai/gpt-4o"],
  "route": "code_generation",
  "trace_id": "f1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6",
  "session_id": "demo-session-001",
@ -277,7 +280,7 @@ kubectl rollout restart deployment/plano
--- 9. Different session gets its own fresh routing ---
{
  "models": ["openai/gpt-4o", "openai/gpt-4o-mini"],
  "route": "complex_reasoning",
  "trace_id": "1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d",
  "session_id": "demo-session-002",

View file

@ -1,4 +1,4 @@
version: v0.4.0

listeners:
  - type: model
@ -6,22 +6,25 @@ listeners:
    port: 12000

model_providers:
  - model: openai/gpt-4o-mini
    access_key: $OPENAI_API_KEY
    default: true
  - model: openai/gpt-4o
    access_key: $OPENAI_API_KEY
  - model: anthropic/claude-sonnet-4-20250514
    access_key: $ANTHROPIC_API_KEY

tracing:
  random_sampling: 100

routing_preferences:
  - name: complex_reasoning
    description: complex reasoning tasks, multi-step analysis, or detailed explanations
    models:
      - openai/gpt-4o
      - openai/gpt-4o-mini
  - name: code_generation
    description: generating new code, writing functions, or creating boilerplate
    models:
      - anthropic/claude-sonnet-4-20250514
      - openai/gpt-4o

View file

@ -8,9 +8,12 @@ echo ""
echo "This demo shows how to use the /routing/v1/* endpoints to get"
echo "routing decisions without actually proxying the request to an LLM."
echo ""
echo "The response includes a ranked 'models' list — use models[0] as the"
echo "primary and fall back to models[1] on 429/5xx errors."
echo ""
# --- Example 1: Code generation (ranked by fastest) ---
echo "--- 1. Code generation query (prefer: fastest) ---"
echo ""
curl -s "$PLANO_URL/routing/v1/chat/completions" \
  -H "Content-Type: application/json" \
@ -22,8 +25,8 @@ curl -s "$PLANO_URL/routing/v1/chat/completions" \
}' | python3 -m json.tool
echo ""
# --- Example 2: Complex reasoning (ranked by cheapest) ---
echo "--- 2. Complex reasoning query (prefer: cheapest) ---"
echo ""
curl -s "$PLANO_URL/routing/v1/chat/completions" \
  -H "Content-Type: application/json" \
@ -36,7 +39,7 @@ curl -s "$PLANO_URL/routing/v1/chat/completions" \
echo ""
# --- Example 3: Simple query (no routing match) ---
echo "--- 3. Simple query - no routing match (falls back to request model) ---"
echo ""
curl -s "$PLANO_URL/routing/v1/chat/completions" \
  -H "Content-Type: application/json" \
@ -62,8 +65,31 @@ curl -s "$PLANO_URL/routing/v1/messages" \
}' | python3 -m json.tool
echo ""
# --- Example 5: Inline routing_preferences with prefer:cheapest ---
echo "--- 5. Inline routing_preferences (prefer: cheapest) ---"
echo "    models[] will be sorted by ascending cost from DigitalOcean pricing"
echo ""
curl -s "$PLANO_URL/routing/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {"role": "user", "content": "Summarize the key differences between TCP and UDP"}
    ],
    "routing_preferences": [
      {
        "name": "general",
        "description": "general questions, explanations, and summaries",
        "models": ["openai/gpt-4o", "openai/gpt-4o-mini"],
        "selection_policy": {"prefer": "cheapest"}
      }
    ]
  }' | python3 -m json.tool
echo ""
# --- Example 6: Inline routing_preferences with prefer:fastest ---
echo "--- 6. Inline routing_preferences (prefer: fastest) ---"
echo "    models[] will be sorted by ascending P95 latency from Prometheus"
echo ""
curl -s "$PLANO_URL/routing/v1/chat/completions" \
  -H "Content-Type: application/json" \
@ -72,46 +98,12 @@ curl -s "$PLANO_URL/routing/v1/chat/completions" \
"messages": [
{"role": "user", "content": "Write a quicksort implementation in Go"}
],
"routing_policy": [
"routing_preferences": [
{
"model": "openai/gpt-4o",
"routing_preferences": [
{"name": "coding", "description": "code generation, writing functions, debugging"}
]
},
{
"model": "openai/gpt-4o-mini",
"routing_preferences": [
{"name": "general", "description": "general questions, simple lookups, casual conversation"}
]
}
]
}' | python3 -m json.tool
echo ""
# --- Example 6: Inline routing policy with Anthropic format ---
echo "--- 6. Inline routing_policy (Anthropic format) ---"
echo ""
curl -s "$PLANO_URL/routing/v1/messages" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o-mini",
"max_tokens": 1024,
"messages": [
{"role": "user", "content": "What is the weather like today?"}
],
"routing_policy": [
{
"model": "openai/gpt-4o",
"routing_preferences": [
{"name": "coding", "description": "code generation, writing functions, debugging"}
]
},
{
"model": "openai/gpt-4o-mini",
"routing_preferences": [
{"name": "general", "description": "general questions, simple lookups, casual conversation"}
]
"name": "coding",
"description": "code generation, writing functions, debugging",
"models": ["anthropic/claude-sonnet-4-20250514", "openai/gpt-4o", "openai/gpt-4o-mini"],
"selection_policy": {"prefer": "fastest"}
}
]
}' | python3 -m json.tool

View file

@ -0,0 +1,17 @@
services:
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yaml:/etc/prometheus/prometheus.yml:ro
    depends_on:
      - model-metrics

  model-metrics:
    image: python:3.11-slim
    ports:
      - "8080:8080"
    volumes:
      - ./metrics_server.py:/metrics_server.py:ro
    command: python /metrics_server.py

View file

@ -0,0 +1,52 @@
"""
Demo metrics server.
Exposes two endpoints:
GET /metrics Prometheus text format, P95 latency per model (scraped by Prometheus)
GET /costs JSON cost data per model, compatible with cost_metrics source
"""
import json
from http.server import HTTPServer, BaseHTTPRequestHandler
PROMETHEUS_METRICS = """\
# HELP model_latency_p95_seconds P95 request latency in seconds per model
# TYPE model_latency_p95_seconds gauge
model_latency_p95_seconds{model_name="anthropic/claude-sonnet-4-20250514"} 0.85
model_latency_p95_seconds{model_name="openai/gpt-4o"} 1.20
model_latency_p95_seconds{model_name="openai/gpt-4o-mini"} 0.40
""".encode()
COST_DATA = {
"anthropic/claude-sonnet-4-20250514": {
"input_per_million": 3.0,
"output_per_million": 15.0,
},
"openai/gpt-4o": {"input_per_million": 5.0, "output_per_million": 20.0},
"openai/gpt-4o-mini": {"input_per_million": 0.15, "output_per_million": 0.6},
}
class MetricsHandler(BaseHTTPRequestHandler):
def do_GET(self):
if self.path == "/costs":
body = json.dumps(COST_DATA).encode()
self.send_response(200)
self.send_header("Content-Type", "application/json")
self.end_headers()
self.wfile.write(body)
else:
# /metrics and everything else → Prometheus format
self.send_response(200)
self.send_header("Content-Type", "text/plain; version=0.0.4; charset=utf-8")
self.end_headers()
self.wfile.write(PROMETHEUS_METRICS)
def log_message(self, fmt, *args):
pass # suppress access logs
if __name__ == "__main__":
server = HTTPServer(("", 8080), MetricsHandler)
print("metrics server listening on :8080 (/metrics, /costs)", flush=True)
server.serve_forever()
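A quick stdlib-only probe of both endpoints, assuming the compose file above has mapped the container's port 8080 to localhost:

```python
# Sketch: sanity-check the demo metrics server locally.
from urllib.request import urlopen

print(urlopen("http://localhost:8080/metrics").read().decode())  # Prometheus text, P95 gauges
print(urlopen("http://localhost:8080/costs").read().decode())    # JSON cost table per model
```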

View file

@ -0,0 +1,8 @@
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: model_latency
    static_configs:
      - targets:
          - model-metrics:8080