Mirror of https://github.com/katanemo/plano.git, synced 2026-04-27 17:56:28 +02:00
use plano-orchestrator for LLM routing, remove arch-router
Replace RouterService/RouterModelV1 (arch-router prompt) with OrchestratorService/OrchestratorModelV1 (plano-orchestrator prompt) for LLM routing. This ensures the correct system prompt is used when llm_routing_model points at a Plano-Orchestrator model.

- Extend OrchestratorService with session caching, ModelMetricsService, top-level routing preferences, and determine_route() for LLM routing
- Delete RouterService, the RouterModel trait, RouterModelV1, and ARCH_ROUTER_V1_SYSTEM_PROMPT
- Unify defaults to Plano-Orchestrator / plano-orchestrator
- Update the CLI config generator, demos, docs, and config schema

Made-with: Cursor
parent 980faef6be
commit af724fcc1e
27 changed files with 380 additions and 1412 deletions
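The commit message mentions an OrchestratorService with session caching, routing preferences, and a determine_route() method, but none of that code appears in the excerpt below. The following is a minimal hypothetical Rust sketch of how such a service could pair a per-session cache with keyword-based routing preferences; every name and the matching logic are assumptions for illustration, not Plano's actual implementation.

```rust
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
struct Route {
    model: String,
}

/// Hypothetical stand-in for the OrchestratorService described in the
/// commit message: routing preferences plus a session-scoped route cache.
struct OrchestratorService {
    // Top-level routing preferences: intent keyword -> target model.
    routing_preferences: Vec<(String, String)>,
    // Session cache: session id -> previously chosen route.
    session_cache: HashMap<String, Route>,
}

impl OrchestratorService {
    fn new(routing_preferences: Vec<(String, String)>) -> Self {
        Self {
            routing_preferences,
            session_cache: HashMap::new(),
        }
    }

    /// Resolve a route for a session, reusing the cached choice when present.
    /// Falls back to the default model when no preference keyword matches.
    fn determine_route(&mut self, session_id: &str, prompt: &str) -> Route {
        if let Some(route) = self.session_cache.get(session_id) {
            return route.clone();
        }
        let model = self
            .routing_preferences
            .iter()
            .find(|(keyword, _)| prompt.to_lowercase().contains(keyword.as_str()))
            .map(|(_, model)| model.clone())
            .unwrap_or_else(|| "plano-orchestrator".to_string());
        let route = Route { model };
        self.session_cache.insert(session_id.to_string(), route.clone());
        route
    }
}

fn main() {
    let mut svc = OrchestratorService::new(vec![
        ("code".to_string(), "code-model".to_string()),
    ]);
    let first = svc.determine_route("s1", "help me write code");
    assert_eq!(first.model, "code-model");
    // A second call with the same session id is served from the cache,
    // so the route is stable even though the prompt no longer matches.
    let second = svc.determine_route("s1", "unrelated prompt");
    assert_eq!(second.model, "code-model");
    println!("ok");
}
```

The session cache is what keeps a multi-turn conversation pinned to one model; the real service presumably layers metrics (ModelMetricsService) and the plano-orchestrator prompt on top of a decision like this.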
````diff
@@ -32,9 +32,9 @@ planoai up config.yaml
 
 3. Test with curl or open AnythingLLM http://localhost:3001/
 
-## Running with local Arch-Router (via Ollama)
+## Running with local routing model (via Ollama)
 
-By default, Plano uses a hosted Arch-Router endpoint. To self-host Arch-Router locally using Ollama:
+By default, Plano uses a hosted Plano-Orchestrator endpoint. To self-host a routing model locally using Ollama:
 
 1. Install [Ollama](https://ollama.ai) and pull the model:
 
 ```bash
````