Models updated

Alpha Nerd 2026-04-13 16:05:10 +02:00
parent 99ed7696a6
commit 8e900c36ff

@@ -9,10 +9,10 @@ All models are available via `api.nomyo.ai`. Pass the model ID string directly t
 | `Qwen/Qwen3-0.6B` | 0.6B | General || Lightweight, fast inference |
 | `Qwen/Qwen3.5-0.8B` | 0.8B | General || Lightweight, fast inference |
 | `LiquidAI/LFM2.5-1.2B-Thinking` | 1.2B | Thinking || Reasoning model |
-| `ibm-granite/granite-4.0-h-small` | Small | General || IBM Granite 4.0, enterprise-focused |
+| `ibm-granite/granite-4.0-h-small` | 32B | General || IBM Granite 4.0, enterprise-focused |
 | `Qwen/Qwen3.5-9B` | 9B | General || Balanced quality and speed |
 | `utter-project/EuroLLM-9B-Instruct-2512` | 9B | General || Multilingual, strong European language support |
-| `zai-org/GLM-4.7-Flash` | | General || Fast GLM variant |
+| `zai-org/GLM-4.7-Flash` | 30B (3B active) | General || Fast GLM variant |
 | `mistralai/Ministral-3-14B-Instruct-2512-GGUF` | 14B | General || Mistral instruction-tuned |
 | `ServiceNow-AI/Apriel-1.6-15b-Thinker` | 15B | Specialized || Reasoning model strong in math/physics/science |
 | `openai/gpt-oss-20b` | 20B | General || OpenAI open-weight release |
@@ -23,7 +23,7 @@ All models are available via `api.nomyo.ai`. Pass the model ID string directly t
 | `Qwen/Qwen3.5-35B-A3B` | 35B (3B active) | General || MoE — efficient inference |
 | `moonshotai/Kimi-Linear-48B-A3B-Instruct` | 48B (3B active) | General || MoE — large capacity, efficient inference |
-> **MoE** (Mixture of Experts) models show total/active parameter counts. Only active parameters are used per token, keeping inference cost low relative to total model size.
+> **MoE** (Mixture of Experts) models show total/active parameter counts.
 ## Usage Example
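For context on how the model IDs in this table are used: a minimal request sketch, assuming `api.nomyo.ai` exposes an OpenAI-compatible `/v1/chat/completions` route and reads a key from a hypothetical `NOMYO_API_KEY` environment variable (neither is confirmed by the diff above; check the project's own usage example):

```python
import json
import os

# Assumed endpoint and key name -- adjust to the actual nomyo.ai documentation.
API_URL = "https://api.nomyo.ai/v1/chat/completions"
API_KEY = os.environ.get("NOMYO_API_KEY", "")

def build_request(model_id: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload.

    The model ID string from the table is passed through unchanged,
    e.g. "Qwen/Qwen3.5-9B" or "zai-org/GLM-4.7-Flash".
    """
    return {
        "model": model_id,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Qwen/Qwen3.5-9B", "Hello!")
print(json.dumps(payload))
```

The payload could then be POSTed with any HTTP client, sending `API_KEY` as a bearer token; only the construction of the request is shown here, since the endpoint details are an assumption.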