add instructions on hosting arch-router locally (#819)

Adil Hafeez 2026-03-11 15:28:50 -07:00 committed by GitHub
parent b4313d93a4
commit 5400b0a2fa
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
2 changed files with 154 additions and 0 deletions


@@ -32,6 +32,37 @@ planoai up config.yaml
3. Test with curl or open AnythingLLM http://localhost:3001/
## Running with local Arch-Router (via Ollama)
By default, Plano uses a hosted Arch-Router endpoint. To self-host Arch-Router locally using Ollama:
1. Install [Ollama](https://ollama.ai) and pull the model:
```bash
ollama pull hf.co/katanemo/Arch-Router-1.5B.gguf:Q4_K_M
```
2. Make sure Ollama is running (`ollama serve` or the macOS app).
3. Start Plano with the local config:
```bash
planoai up plano_config_local.yaml
```
4. Test routing:
```bash
curl -s "http://localhost:12000/routing/v1/messages" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o-mini",
"max_tokens": 1024,
"messages": [
{"role": "user", "content": "Create a REST API endpoint in Rust using actix-web"}
]
}'
```
You should see the router select the appropriate model based on the routing preferences defined in `plano_config_local.yaml`.
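If the routing request fails, it can help to first confirm that Ollama is actually serving the Arch-Router model. The check below is a minimal sketch that talks to Ollama directly on its default port (11434) via its standard OpenAI-compatible endpoint, bypassing Plano entirely; the test prompt is purely illustrative:
```bash
# Confirm the model was pulled and is known to Ollama
ollama list | grep Arch-Router

# Send a request straight to Ollama (default port 11434) to confirm
# the model loads and responds, independently of Plano
curl -s "http://localhost:11434/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "hf.co/katanemo/Arch-Router-1.5B.gguf:Q4_K_M",
    "messages": [
      {"role": "user", "content": "Hello"}
    ]
  }'
```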
# Testing out preference-based routing
We have defined two routes: 1. code generation and 2. code understanding.
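To exercise the second route, you can reuse the routing request from step 4 with an explanation-style prompt. The prompt below is only an example; the endpoint and payload shape are the same as in the earlier curl:
```bash
# Exercise the code-understanding route with an explanation-style prompt
curl -s "http://localhost:12000/routing/v1/messages" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "max_tokens": 1024,
    "messages": [
      {"role": "user", "content": "Explain what this Rust function does: fn add(a: i32, b: i32) -> i32 { a + b }"}
    ]
  }'
```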