Usage-based LLM Routing
This demo shows how you can use user preferences to route user prompts to the appropriate LLM. See config.yaml for details on how to define these preferences.
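For orientation, routing preferences are typically declared per model in the config. The sketch below is illustrative only: the llm_providers / routing_preferences field names are assumptions here, and the route names match the ones that appear in the logs later in this demo. Treat config.yaml in this directory as the authoritative schema and values.

llm_providers:
  - model: openai/gpt-4o
    access_key: $OPENAI_API_KEY
    routing_preferences:
      - name: code_generation
        description: generating new code, implementing functions or endpoints
  - model: openai/gpt-4o-mini
    access_key: $OPENAI_API_KEY
    routing_preferences:
      - name: code_understanding
        description: explaining, reviewing, or summarizing existing code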
How to start the demo
Make sure you have the Plano CLI installed (pip install planoai or uv tool install planoai).
cd demos/llm_routing/preference_based_routing
./run_demo.sh
To also start AnythingLLM (chat UI) and Jaeger (tracing):
./run_demo.sh --with-ui
Then open AnythingLLM at http://localhost:3001/
Or start manually:
- (Optional) Start AnythingLLM and Jaeger
docker compose up -d
- Start Plano
planoai up config.yaml
- Test with curl (see the example request in the next section) or open AnythingLLM at http://localhost:3001/
Running with local Arch-Router (via Ollama)
By default, Plano uses a hosted Arch-Router endpoint. To self-host Arch-Router locally using Ollama:
- Install Ollama and pull the model:
ollama pull hf.co/katanemo/Arch-Router-1.5B.gguf:Q4_K_M
- Make sure Ollama is running (ollama serve or the macOS app).
- Start Plano with the local config:
planoai up plano_config_local.yaml
- Test routing:
curl -s "http://localhost:12000/routing/v1/messages" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o-mini",
"max_tokens": 1024,
"messages": [
{"role": "user", "content": "Create a REST API endpoint in Rust using actix-web"}
]
}'
You should see the router select the appropriate model based on the routing preferences defined in plano_config_local.yaml.
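For reference, here is a hedged sketch of the kind of difference you would expect to find in plano_config_local.yaml: the routing model points at your local Ollama server instead of the hosted endpoint. The overrides / llm_routing_model key names and the exact structure are assumptions here, so check the file itself for the real schema.

overrides:
  # Point the routing model at the local Ollama server instead of the hosted endpoint.
  # http://localhost:11434/v1 is Ollama's OpenAI-compatible API; adjust if you changed ports.
  llm_routing_model:
    model: hf.co/katanemo/Arch-Router-1.5B.gguf:Q4_K_M
    base_url: http://localhost:11434/v1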
Testing out preference-based routing
We have defined two routes: (1) code generation and (2) code understanding.
For a code generation query, the LLM that is better suited for code generation will handle the request.
If you look at the logs, you'll see that the code generation LLM was selected:
...
2025-05-31T01:02:19.382716Z INFO brightstaff::router::llm_router: router response: {'route': 'code_generation'}, response time: 203ms
...
Now ask a query related to code understanding; the LLM better suited for code understanding will handle it. For example:
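The request below mirrors the earlier curl call and only changes the prompt; the specific wording is just an illustration, and the port assumes the same listener address as above.

curl -s "http://localhost:12000/routing/v1/messages" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o-mini",
"max_tokens": 1024,
"messages": [
{"role": "user", "content": "Explain what this Rust handler does and how its error handling works"}
]
}'

In the logs, you'll see the code understanding route selected: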
...
2025-05-31T01:06:33.555680Z INFO brightstaff::router::llm_router: router response: {'route': 'code_understanding'}, response time: 327ms
...