Adding support for wildcard models in the model_providers config (#696)

* cleaning up plano cli commands

* adding support for wildcard model providers

* fixing compile errors

* fixing bugs related to default model provider, provider hint and duplicates in the model provider list

* fixed cargo fmt issues

* updating tests to always include the model id

* using default for the prompt_gateway path

* fixed the model name, as gpt-5-mini-2025-08-07 wasn't in the config

* making sure that all aliases and models match the config

* fixed the config generator to allow LLM providers configured via base_url to include wildcard models

* re-ran the models list utility and added a shell script to run it

* updating docs to mention wildcard model providers

* converted provider_models.json to YAML and added that file to our docs for reference

* updating the build docs to use the new root-based build

---------

Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-342.local>
Salman Paracha 2026-01-28 17:47:33 -08:00 committed by GitHub
parent 8428b06e22
commit 2941392ed1
GPG key ID: B5690EEEBB952194
42 changed files with 1748 additions and 202 deletions
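The commit messages above describe wildcard support in the model_providers config. As a rough illustration only, a provider list mixing explicitly listed models with a wildcard entry might look like the following; the key names and values here are hypothetical assumptions, not taken from this commit:

```yaml
# Hypothetical sketch of a model_providers list with a wildcard entry.
# Key names and values are illustrative assumptions, not from this commit.
model_providers:
  - name: openai
    access_key: $OPENAI_API_KEY
    models:
      - gpt-5-mini-2025-08-07   # explicitly listed model
  - name: local-llm
    base_url: http://localhost:8000
    models:
      - "*"                     # wildcard: accept any model id routed here
```

In a scheme like this, a request for a model id not listed anywhere would fall through to the wildcard provider, while explicitly listed models still route to their named provider.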


@@ -109,7 +109,7 @@ def test_openai_responses_api_non_streaming_with_tools_passthrough():
     ]
     resp = client.responses.create(
-        model="gpt-5",
+        model="openai/gpt-5-mini-2025-08-07",
         input="Call the echo tool",
         tools=tools,
     )
@@ -140,7 +140,7 @@ def test_openai_responses_api_with_streaming_with_tools_passthrough():
     ]
     stream = client.responses.create(
-        model="gpt-5",
+        model="openai/gpt-5-mini-2025-08-07",
         input="Call the echo tool",
         tools=tools,
         stream=True,
@@ -638,7 +638,7 @@ def test_openai_responses_api_mixed_content_types():
     # This test mimics the request that was failing:
     # One message with string content, another with array content
     resp = client.responses.create(
-        model="arch.title.v1",
+        model="openai/gpt-5-mini-2025-08-07",
         input=[
             {
                 "role": "developer",