Adding support for wildcard models in the model_providers config (#696)

* cleaning up plano cli commands

* adding support for wildcard model providers

* fixing compile errors

* fixing bugs related to default model provider, provider hint and duplicates in the model provider list

* fixed cargo fmt issues

* updating tests to always include the model id

* using default for the prompt_gateway path

* fixed the model name, as gpt-5-mini-2025-08-07 wasn't in the config

* making sure that all aliases and models match the config

* fixed the config generator to allow base_url provider LLMs to include wildcard models

* re-ran the models list utility and added a shell script to run it

* updating docs to mention wildcard model providers

* updated provider_models.json to yaml, added that file to our docs for reference

* updating the build docs to use the new root-based build
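
A wildcard model provider like the one added in this change might look roughly like the sketch below. This is a hypothetical illustration only — the key names (`model_providers`, `name`, `access_key`, `model`) and the `"*"` wildcard syntax are assumptions, not the project's actual config schema; see the docs and `provider_models.yaml` referenced above for the real format.

```yaml
# Hypothetical sketch of a model_providers config with a wildcard entry.
# Key names and wildcard syntax are assumptions, not the actual schema.
model_providers:
  - name: openai
    access_key: $OPENAI_API_KEY
    model: gpt-4o          # explicit model entry
  - name: mistral
    access_key: $MISTRAL_API_KEY
    model: "*"             # wildcard: route any model id to this provider
```

With a wildcard entry, a request whose model id is not listed explicitly can still be routed, and the `x-arch-llm-provider-hint` header (shown in the diff below) can force a specific provider/model pair.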

---------

Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-342.local>
Salman Paracha 2026-01-28 17:47:33 -08:00 committed by GitHub
parent 8428b06e22
commit 2941392ed1
42 changed files with 1748 additions and 202 deletions


@@ -19,7 +19,7 @@ You can also pass in a header to override model when sending prompt. Following e
 $ curl --header 'Content-Type: application/json' \
     --header 'x-arch-llm-provider-hint: mistral/ministral-3b' \
-    --data '{"messages": [{"role": "user","content": "hello"}], "model": "none"}' \
+    --data '{"messages": [{"role": "user","content": "hello"}], "model": "gpt-4o"}' \
http://localhost:12000/v1/chat/completions 2> /dev/null | jq .
{
"id": "xxx",