mirror of https://github.com/katanemo/plano.git (synced 2026-05-05 05:42:49 +02:00)
deploy: 2a36dd7376
This commit is contained in: parent 9c0ad075ca, commit 58a833cfd4
6 changed files with 548 additions and 29 deletions
@@ -1,6 +1,6 @@
Plano Docs v0.4.3
llms.txt (auto-generated)
-Generated (UTC): 2026-01-29T01:18:45.966758+00:00
+Generated (UTC): 2026-01-29T02:56:05.745300+00:00

Table of contents
- Agents (concepts/agents)
@@ -831,6 +831,8 @@ First-Class Providers: Native integrations with OpenAI, Anthropic, DeepSeek, Mis
OpenAI-Compatible Providers: Any provider implementing the OpenAI Chat Completions API standard

Wildcard Model Configuration: Automatically configure all models from a provider using provider/* syntax

Intelligent Routing
Three powerful routing approaches to optimize model selection:
@@ -1162,7 +1164,7 @@ llm_providers:
Common Configuration Fields:

-model: Provider prefix and model name (format: provider/model-name)
+model: Provider prefix and model name (format: provider/model-name or provider/* for wildcard expansion)

access_key: API key for authentication (supports environment variables)
@@ -1277,7 +1279,11 @@ Advanced reasoning model (preview)
Configuration Examples:

llm_providers:
-  # Latest models (examples - use any OpenAI chat model)
+  # Configure all OpenAI models with wildcard
  - model: openai/*
    access_key: $OPENAI_API_KEY

  # Or configure specific models
  - model: openai/gpt-5.2
    access_key: $OPENAI_API_KEY
    default: true
@@ -1285,7 +1291,6 @@ llm_providers:
  - model: openai/gpt-5
    access_key: $OPENAI_API_KEY

  # Use any model name from OpenAI's API
  - model: openai/gpt-4o
    access_key: $OPENAI_API_KEY
@@ -1338,17 +1343,29 @@ Complex agents and coding
Configuration Examples:

llm_providers:
-  # Latest models (examples - use any Anthropic chat model)
+  # Configure all Anthropic models with wildcard
  - model: anthropic/*
    access_key: $ANTHROPIC_API_KEY

  # Or configure specific models
  - model: anthropic/claude-opus-4-5
    access_key: $ANTHROPIC_API_KEY

  - model: anthropic/claude-sonnet-4-5
    access_key: $ANTHROPIC_API_KEY

  # Use any model name from Anthropic's API
  - model: anthropic/claude-haiku-4-5
    access_key: $ANTHROPIC_API_KEY

  # Override specific model with custom routing
  - model: anthropic/*
    access_key: $ANTHROPIC_API_KEY

  - model: anthropic/claude-sonnet-4-20250514
    access_key: $ANTHROPIC_PROD_API_KEY
    routing_preferences:
      - name: code_generation

DeepSeek

Provider Prefix: deepseek/
@@ -1928,6 +1945,95 @@ llm_providers:
    access_key: $OPENAI_DEV_KEY
    name: openai-dev

Wildcard Model Configuration

Automatically configure all available models from a provider using wildcard patterns. Plano expands wildcards at configuration load time to include all known models from the provider's registry.

Basic Wildcard Usage:

llm_providers:
  # Expand to all OpenAI models
  - model: openai/*
    access_key: $OPENAI_API_KEY

  # Expand to all Anthropic Claude models
  - model: anthropic/*
    access_key: $ANTHROPIC_API_KEY

  # Expand to all Mistral models
  - model: mistral/*
    access_key: $MISTRAL_API_KEY

How Wildcards Work:

Known Providers (OpenAI, Anthropic, DeepSeek, Mistral, Groq, Gemini, Together AI, xAI, Moonshot, Zhipu):

Expands at config load time to all models in Plano's provider registry
Creates entries for both canonical (openai/gpt-4) and short names (gpt-4)
Enables the /models/list endpoint to list all available models
View complete model list: provider_models.yaml

Unknown/Custom Providers (e.g., custom-provider/*):

Stores as a wildcard pattern for runtime matching
Requires base_url and provider_interface configuration
Matches model requests dynamically (e.g., custom-provider/any-model-name)
Does not appear in /models/list endpoint
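The two behaviors above can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not Plano's actual implementation: the registry dict, function names, and model subset are invented for the example.

```python
from fnmatch import fnmatch

# Hypothetical stand-in for Plano's provider registry (provider_models.yaml).
REGISTRY = {
    "openai": ["openai/gpt-4o", "openai/gpt-4o-mini"],
    "anthropic": ["anthropic/claude-sonnet-4-5"],
}

def expand_wildcard(pattern):
    """Known provider: expand at load time to canonical and short names."""
    provider, model = pattern.split("/", 1)
    if model == "*" and provider in REGISTRY:
        entries = []
        for canonical in REGISTRY[provider]:
            entries.append(canonical)                   # e.g. openai/gpt-4o
            entries.append(canonical.split("/", 1)[1])  # short name: gpt-4o
        return entries
    return None  # unknown provider: keep the pattern for runtime matching

def matches_runtime(pattern, requested):
    """Unknown/custom provider: match requested model names dynamically."""
    return fnmatch(requested, pattern)

print(expand_wildcard("openai/*"))
print(matches_runtime("custom-provider/*", "custom-provider/any-model-name"))
```

A known provider's wildcard becomes a concrete model list up front (so it can appear in /models/list), while an unknown provider's wildcard stays a pattern and is only consulted per request.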

Overriding Wildcard Models:

You can configure specific models with custom settings even when using wildcards. Specific configurations take precedence and are excluded from wildcard expansion:

llm_providers:
  # Expand to all Anthropic models
  - model: anthropic/*
    access_key: $ANTHROPIC_API_KEY

  # Override specific model with custom settings
  # This model will NOT be included in the wildcard expansion above
  - model: anthropic/claude-sonnet-4-20250514
    access_key: $ANTHROPIC_PROD_API_KEY
    routing_preferences:
      - name: code_generation
        priority: 1

  # Another specific override
  - model: anthropic/claude-3-haiku-20240307
    access_key: $ANTHROPIC_DEV_API_KEY
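The precedence rule described above ("specific configurations win and are excluded from expansion") can be sketched as follows. This is illustrative only; the registry subset, field layout, and function name are assumptions, not Plano internals.

```python
# Hypothetical registry subset standing in for provider_models.yaml.
REGISTRY = {
    "anthropic": [
        "anthropic/claude-sonnet-4-20250514",
        "anthropic/claude-3-haiku-20240307",
        "anthropic/claude-opus-4-5",
    ],
}

def resolve_providers(configs):
    """Expand wildcards, skipping models that already have a specific entry."""
    specific = {c["model"] for c in configs if not c["model"].endswith("/*")}
    resolved = [c for c in configs if not c["model"].endswith("/*")]
    for c in configs:
        if c["model"].endswith("/*"):
            provider = c["model"].split("/", 1)[0]
            for m in REGISTRY.get(provider, []):
                if m not in specific:  # the specific override wins
                    resolved.append({**c, "model": m})
    return resolved

configs = [
    {"model": "anthropic/*", "access_key": "$ANTHROPIC_API_KEY"},
    {"model": "anthropic/claude-sonnet-4-20250514",
     "access_key": "$ANTHROPIC_PROD_API_KEY"},
]
for entry in resolve_providers(configs):
    print(entry["model"], entry["access_key"])
```

In this sketch the sonnet model keeps its dedicated key while the remaining registry models inherit the wildcard's key, mirroring the YAML example above.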

Custom Provider Wildcards:

For providers not in Plano's registry, wildcards enable dynamic model routing:

llm_providers:
  # Custom LiteLLM deployment
  - model: litellm/*
    base_url: https://litellm.example.com
    provider_interface: openai
    passthrough_auth: true

  # Custom provider with all models
  - model: custom-provider/*
    access_key: $CUSTOM_API_KEY
    base_url: https://api.custom-provider.com
    provider_interface: openai

Benefits:

Simplified Configuration: One line instead of listing dozens of models
Future-Proof: Automatically includes new models as they're released
Flexible Overrides: Customize specific models while using wildcards for others
Selective Expansion: Control which models get custom configurations

Default Model Configuration

Mark one model as the default for fallback scenarios:
@@ -3038,7 +3144,7 @@ Step 3: Interact with LLM
Step 3.1: Using curl command

$ curl --header 'Content-Type: application/json' \
-  --data '{"messages": [{"role": "user","content": "What is the capital of France?"}], "model": "none"}' \
+  --data '{"messages": [{"role": "user","content": "What is the capital of France?"}], "model": "gpt-4o"}' \
  http://localhost:12000/v1/chat/completions

{
@@ -3225,7 +3331,7 @@ Step 3. Interacting with gateway using curl command
Here is a sample curl command you can use to interact:

$ curl --header 'Content-Type: application/json' \
-  --data '{"messages": [{"role": "user","content": "what is exchange rate for gbp"}], "model": "none"}' \
+  --data '{"messages": [{"role": "user","content": "what is exchange rate for gbp"}], "model": "gpt-4o"}' \
  http://localhost:10000/v1/chat/completions | jq ".choices[0].message.content"

"As of the date provided in your context, December 5, 2024, the exchange rate for GBP (British Pound) from USD (United States Dollar) is 0.78558. This means that 1 USD is equivalent to 0.78558 GBP."
@@ -3233,7 +3339,7 @@ $ curl --header 'Content-Type: application/json' \
And to get the list of supported currencies:

$ curl --header 'Content-Type: application/json' \
-  --data '{"messages": [{"role": "user","content": "show me list of currencies that are supported for conversion"}], "model": "none"}' \
+  --data '{"messages": [{"role": "user","content": "show me list of currencies that are supported for conversion"}], "model": "gpt-4o"}' \
  http://localhost:10000/v1/chat/completions | jq ".choices[0].message.content"

"Here is a list of the currencies that are supported for conversion from USD, along with their symbols:\n\n1. AUD - Australian Dollar\n2. BGN - Bulgarian Lev\n3. BRL - Brazilian Real\n4. CAD - Canadian Dollar\n5. CHF - Swiss Franc\n6. CNY - Chinese Renminbi Yuan\n7. CZK - Czech Koruna\n8. DKK - Danish Krone\n9. EUR - Euro\n10. GBP - British Pound\n11. HKD - Hong Kong Dollar\n12. HUF - Hungarian Forint\n13. IDR - Indonesian Rupiah\n14. ILS - Israeli New Sheqel\n15. INR - Indian Rupee\n16. ISK - Icelandic Króna\n17. JPY - Japanese Yen\n18. KRW - South Korean Won\n19. MXN - Mexican Peso\n20. MYR - Malaysian Ringgit\n21. NOK - Norwegian Krone\n22. NZD - New Zealand Dollar\n23. PHP - Philippine Peso\n24. PLN - Polish Złoty\n25. RON - Romanian Leu\n26. SEK - Swedish Krona\n27. SGD - Singapore Dollar\n28. THB - Thai Baht\n29. TRY - Turkish Lira\n30. USD - United States Dollar\n31. ZAR - South African Rand\n\nIf you want to convert USD to any of these currencies, you can select the one you are interested in."

includes/provider_models.yaml (new executable file, 315 lines)
@@ -0,0 +1,315 @@
version: '1.0'
source: canonical-apis
providers:
  qwen:
    - qwen/qwen3-max-2026-01-23
    - qwen/qwen-plus-character
    - qwen/qwen-flash-character
    - qwen/qwen-flash
    - qwen/qwen3-vl-plus-2025-12-19
    - qwen/qwen3-omni-flash-2025-12-01
    - qwen/qwen3-livetranslate-flash-2025-12-01
    - qwen/qwen3-livetranslate-flash
    - qwen/qwen-mt-lite
    - qwen/qwen-plus-2025-12-01
    - qwen/qwen-mt-flash
    - qwen/ccai-pro
    - qwen/tongyi-tingwu-slp
    - qwen/qwen3-vl-flash
    - qwen/qwen3-vl-flash-2025-10-15
    - qwen/qwen3-omni-flash
    - qwen/qwen3-omni-flash-2025-09-15
    - qwen/qwen3-omni-30b-a3b-captioner
    - qwen/qwen2.5-7b-instruct
    - qwen/qwen2.5-14b-instruct
    - qwen/qwen2.5-32b-instruct
    - qwen/qwen2.5-72b-instruct
    - qwen/qwen2.5-14b-instruct-1m
    - qwen/qwen2.5-7b-instruct-1m
    - qwen/qwen-max-2025-01-25
    - qwen/qwen-max-latest
    - qwen/qwen-turbo-2024-11-01
    - qwen/qwen-turbo-latest
    - qwen/qwen-plus-latest
    - qwen/qwen-plus-2025-01-25
    - qwen/qwq-plus-2025-03-05
    - qwen/qwen-mt-turbo
    - qwen/qwen-mt-plus
    - qwen/qwen-coder-plus
    - qwen/qwq-plus
    - qwen/qwen2.5-vl-32b-instruct
    - qwen/qvq-max
    - qwen/qwen-omni-turbo
    - qwen/qwen3-8b
    - qwen/qwen3-30b-a3b
    - qwen/qwen3-235b-a22b
    - qwen/qwen-turbo-2025-04-28
    - qwen/qwen-plus-2025-04-28
    - qwen/qwen-vl-max-2025-04-08
    - qwen/qwen-vl-plus-2025-01-25
    - qwen/qwen-vl-plus-latest
    - qwen/qwen-vl-max-latest
    - qwen/qwen-vl-plus-2025-05-07
    - qwen/qwen3-coder-plus
    - qwen/qwen3-coder-480b-a35b-instruct
    - qwen/qwen3-235b-a22b-instruct-2507
    - qwen/qwen-plus-2025-07-14
    - qwen/qwen3-coder-plus-2025-07-22
    - qwen/qwen3-235b-a22b-thinking-2507
    - qwen/qwen3-coder-flash
    - qwen/qwen-vl-max
    - qwen/qwen-vl-max-2025-08-13
    - qwen/qwen3-max
    - qwen/qwen3-max-2025-09-23
    - qwen/qwen3-vl-plus
    - qwen/qwen3-vl-235b-a22b-instruct
    - qwen/qwen3-vl-235b-a22b-thinking
    - qwen/qwen3-30b-a3b-thinking-2507
    - qwen/qwen3-30b-a3b-instruct-2507
    - qwen/qwen3-14b
    - qwen/qwen3-32b
    - qwen/qwen3-0.6b
    - qwen/qwen3-4b
    - qwen/qwen3-1.7b
    - qwen/qwen-vl-plus
    - qwen/qwen3-coder-plus-2025-09-23
    - qwen/qwen3-vl-plus-2025-09-23
    - qwen/qwen-plus-2025-09-11
    - qwen/qwen3-next-80b-a3b-thinking
    - qwen/qwen3-next-80b-a3b-instruct
    - qwen/qwen3-max-preview
    - qwen/qwen2-7b-instruct
    - qwen/qwen-max
    - qwen/qwen-plus
    - qwen/qwen-turbo
  openai:
    - openai/gpt-4-0613
    - openai/gpt-4
    - openai/gpt-3.5-turbo
    - openai/gpt-5.2-codex
    - openai/gpt-3.5-turbo-instruct
    - openai/gpt-3.5-turbo-instruct-0914
    - openai/gpt-4-1106-preview
    - openai/gpt-3.5-turbo-1106
    - openai/gpt-4-0125-preview
    - openai/gpt-4-turbo-preview
    - openai/gpt-3.5-turbo-0125
    - openai/gpt-4-turbo
    - openai/gpt-4-turbo-2024-04-09
    - openai/gpt-4o
    - openai/gpt-4o-2024-05-13
    - openai/gpt-4o-mini-2024-07-18
    - openai/gpt-4o-mini
    - openai/gpt-4o-2024-08-06
    - openai/chatgpt-4o-latest
    - openai/o1-2024-12-17
    - openai/o1
    - openai/computer-use-preview
    - openai/o3-mini
    - openai/o3-mini-2025-01-31
    - openai/gpt-4o-2024-11-20
    - openai/computer-use-preview-2025-03-11
    - openai/gpt-4o-search-preview-2025-03-11
    - openai/gpt-4o-search-preview
    - openai/gpt-4o-mini-search-preview-2025-03-11
    - openai/gpt-4o-mini-search-preview
    - openai/o1-pro-2025-03-19
    - openai/o1-pro
    - openai/o3-2025-04-16
    - openai/o4-mini-2025-04-16
    - openai/o3
    - openai/o4-mini
    - openai/gpt-4.1-2025-04-14
    - openai/gpt-4.1
    - openai/gpt-4.1-mini-2025-04-14
    - openai/gpt-4.1-mini
    - openai/gpt-4.1-nano-2025-04-14
    - openai/gpt-4.1-nano
    - openai/codex-mini-latest
    - openai/o3-pro
    - openai/o3-pro-2025-06-10
    - openai/o4-mini-deep-research
    - openai/o3-deep-research
    - openai/o3-deep-research-2025-06-26
    - openai/o4-mini-deep-research-2025-06-26
    - openai/gpt-5-chat-latest
    - openai/gpt-5-2025-08-07
    - openai/gpt-5
    - openai/gpt-5-mini-2025-08-07
    - openai/gpt-5-mini
    - openai/gpt-5-nano-2025-08-07
    - openai/gpt-5-nano
    - openai/gpt-5-codex
    - openai/gpt-5-pro-2025-10-06
    - openai/gpt-5-pro
    - openai/gpt-5-search-api
    - openai/gpt-5-search-api-2025-10-14
    - openai/gpt-5.1-chat-latest
    - openai/gpt-5.1-2025-11-13
    - openai/gpt-5.1
    - openai/gpt-5.1-codex
    - openai/gpt-5.1-codex-mini
    - openai/gpt-5.1-codex-max
    - openai/gpt-5.2-2025-12-11
    - openai/gpt-5.2
    - openai/gpt-5.2-pro-2025-12-11
    - openai/gpt-5.2-pro
    - openai/gpt-5.2-chat-latest
    - openai/gpt-3.5-turbo-16k
    - openai/ft:gpt-3.5-turbo-0613:katanemo::8CMZbm0P
  google:
    - google/gemini-2.5-flash
    - google/gemini-2.5-pro
    - google/gemini-2.0-flash-exp
    - google/gemini-2.0-flash
    - google/gemini-2.0-flash-001
    - google/gemini-2.0-flash-exp-image-generation
    - google/gemini-2.0-flash-lite-001
    - google/gemini-2.0-flash-lite
    - google/gemini-2.0-flash-lite-preview-02-05
    - google/gemini-2.0-flash-lite-preview
    - google/gemini-exp-1206
    - google/gemini-2.5-flash-preview-tts
    - google/gemini-2.5-pro-preview-tts
    - google/gemma-3-1b-it
    - google/gemma-3-4b-it
    - google/gemma-3-12b-it
    - google/gemma-3-27b-it
    - google/gemma-3n-e4b-it
    - google/gemma-3n-e2b-it
    - google/gemini-flash-latest
    - google/gemini-flash-lite-latest
    - google/gemini-pro-latest
    - google/gemini-2.5-flash-lite
    - google/gemini-2.5-flash-image
    - google/gemini-2.5-flash-preview-09-2025
    - google/gemini-2.5-flash-lite-preview-09-2025
    - google/gemini-3-pro-preview
    - google/gemini-3-flash-preview
    - google/gemini-3-pro-image-preview
    - google/nano-banana-pro-preview
    - google/gemini-robotics-er-1.5-preview
    - google/gemini-2.5-computer-use-preview-10-2025
    - google/deep-research-pro-preview-12-2025
  mistralai:
    - mistralai/mistral-medium-2505
    - mistralai/mistral-medium-2508
    - mistralai/mistral-medium-latest
    - mistralai/mistral-medium
    - mistralai/open-mistral-nemo
    - mistralai/open-mistral-nemo-2407
    - mistralai/mistral-tiny-2407
    - mistralai/mistral-tiny-latest
    - mistralai/mistral-large-2411
    - mistralai/pixtral-large-2411
    - mistralai/pixtral-large-latest
    - mistralai/mistral-large-pixtral-2411
    - mistralai/codestral-2508
    - mistralai/codestral-latest
    - mistralai/devstral-small-2507
    - mistralai/devstral-medium-2507
    - mistralai/devstral-2512
    - mistralai/mistral-vibe-cli-latest
    - mistralai/devstral-medium-latest
    - mistralai/devstral-latest
    - mistralai/labs-devstral-small-2512
    - mistralai/devstral-small-latest
    - mistralai/mistral-small-2506
    - mistralai/mistral-small-latest
    - mistralai/labs-mistral-small-creative
    - mistralai/magistral-medium-2509
    - mistralai/magistral-medium-latest
    - mistralai/magistral-small-2509
    - mistralai/magistral-small-latest
    - mistralai/mistral-large-2512
    - mistralai/mistral-large-latest
    - mistralai/ministral-3b-2512
    - mistralai/ministral-3b-latest
    - mistralai/ministral-8b-2512
    - mistralai/ministral-8b-latest
    - mistralai/ministral-14b-2512
    - mistralai/ministral-14b-latest
    - mistralai/open-mistral-7b
    - mistralai/mistral-tiny
    - mistralai/mistral-tiny-2312
    - mistralai/pixtral-12b-2409
    - mistralai/pixtral-12b
    - mistralai/pixtral-12b-latest
    - mistralai/ministral-3b-2410
    - mistralai/ministral-8b-2410
    - mistralai/codestral-2501
    - mistralai/codestral-2412
    - mistralai/codestral-2411-rc5
    - mistralai/mistral-small-2501
    - mistralai/mistral-embed-2312
    - mistralai/mistral-embed
    - mistralai/codestral-embed
    - mistralai/codestral-embed-2505
  z-ai:
    - z-ai/glm-4.5
    - z-ai/glm-4.5-air
    - z-ai/glm-4.6
    - z-ai/glm-4.7
  amazon:
    - amazon/amazon.nova-pro-v1:0
    - amazon/amazon.nova-2-lite-v1:0
    - amazon/amazon.nova-2-sonic-v1:0
    - amazon/amazon.titan-tg1-large
    - amazon/amazon.nova-premier-v1:0:8k
    - amazon/amazon.nova-premier-v1:0:20k
    - amazon/amazon.nova-premier-v1:0:1000k
    - amazon/amazon.nova-premier-v1:0:mm
    - amazon/amazon.nova-premier-v1:0
    - amazon/amazon.nova-lite-v1:0
    - amazon/amazon.nova-micro-v1:0
  deepseek:
    - deepseek/deepseek-chat
    - deepseek/deepseek-reasoner
  x-ai:
    - x-ai/grok-2-vision-1212
    - x-ai/grok-3
    - x-ai/grok-3-mini
    - x-ai/grok-4-0709
    - x-ai/grok-4-1-fast-non-reasoning
    - x-ai/grok-4-1-fast-reasoning
    - x-ai/grok-4-fast-non-reasoning
    - x-ai/grok-4-fast-reasoning
    - x-ai/grok-code-fast-1
  moonshotai:
    - moonshotai/kimi-latest
    - moonshotai/kimi-k2.5
    - moonshotai/moonshot-v1-8k-vision-preview
    - moonshotai/kimi-k2-thinking
    - moonshotai/moonshot-v1-auto
    - moonshotai/kimi-k2-0711-preview
    - moonshotai/moonshot-v1-32k
    - moonshotai/kimi-k2-thinking-turbo
    - moonshotai/kimi-k2-0905-preview
    - moonshotai/moonshot-v1-128k
    - moonshotai/moonshot-v1-32k-vision-preview
    - moonshotai/moonshot-v1-128k-vision-preview
    - moonshotai/kimi-k2-turbo-preview
    - moonshotai/moonshot-v1-8k
  anthropic:
    - anthropic/claude-opus-4-5-20251101
    - anthropic/claude-opus-4-5
    - anthropic/claude-haiku-4-5-20251001
    - anthropic/claude-haiku-4-5
    - anthropic/claude-sonnet-4-5-20250929
    - anthropic/claude-sonnet-4-5
    - anthropic/claude-opus-4-1-20250805
    - anthropic/claude-opus-4-1
    - anthropic/claude-opus-4-20250514
    - anthropic/claude-opus-4
    - anthropic/claude-sonnet-4-20250514
    - anthropic/claude-sonnet-4
    - anthropic/claude-3-7-sonnet-20250219
    - anthropic/claude-3-7-sonnet
    - anthropic/claude-3-5-haiku-20241022
    - anthropic/claude-3-5-haiku
    - anthropic/claude-3-haiku-20240307
    - anthropic/claude-3-haiku
metadata:
  total_providers: 10
  total_models: 298
  last_updated: 2026-01-27T22:40:53.653700+00:00
||||
Loading…
Add table
Add a link
Reference in a new issue