Models updated

Alpha Nerd 2026-04-13 16:46:35 +02:00
parent 8e900c36ff
commit 43c29f9f66

@@ -11,17 +11,17 @@ All models are available via `api.nomyo.ai`. Pass the model ID string directly t
 | Model ID | Params | Type | Context | Notes |
 |---|---|---|---|---|
 | `LiquidAI/LFM2.5-1.2B-Thinking` | 1.2B | Thinking || Reasoning model |
 | `ibm-granite/granite-4.0-h-small` | 32B | General || IBM Granite 4.0, enterprise-focused |
 | `Qwen/Qwen3.5-9B` | 9B | General || Balanced quality and speed |
-| `utter-project/EuroLLM-9B-Instruct-2512` | 9B | General || Multilingual, strong European language support |
-| `zai-org/GLM-4.7-Flash` | 30B (3B active) | General || Fast GLM variant |
+| `utter-project/EuroLLM-9B-Instruct-2512` | 9B | General | 32k | Multilingual, strong European language support |
+| `zai-org/GLM-4.7-Flash` | 30B (3B active) | General | 131k | Fast GLM variant |
 | `mistralai/Ministral-3-14B-Instruct-2512-GGUF` | 14B | General || Mistral instruction-tuned |
 | `ServiceNow-AI/Apriel-1.6-15b-Thinker` | 15B | Specialized || Reasoning model strong in math/physics/science |
-| `openai/gpt-oss-20b` | 20B | General || OpenAI open-weight release |
+| `openai/gpt-oss-20b` | 20B | General | 131k | OpenAI open-weight release |
 | `LiquidAI/LFM2-24B-A2B` | 24B (2B active) | General || MoE — efficient inference |
 | `Qwen/Qwen3.5-27B` | 27B | General || High quality, large context |
 | `google/medgemma-27b-it` | 27B | Specialized || Medical domain, instruction-tuned |
-| `nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-NVFP4` | 30B (3B active) | General || MoE — efficient inference |
-| `Qwen/Qwen3.5-35B-A3B` | 35B (3B active) | General || MoE — efficient inference |
-| `moonshotai/Kimi-Linear-48B-A3B-Instruct` | 48B (3B active) | General || MoE — large capacity, efficient inference |
+| `nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-NVFP4` | 30B (3B active) | General | 200k | MoE — efficient inference |
+| `Qwen/Qwen3.5-35B-A3B` | 35B (3B active) | General | 200k | MoE — efficient inference |
+| `moonshotai/Kimi-Linear-48B-A3B-Instruct` | 48B (3B active) | General | 1M | MoE — large capacity, efficient inference |
 > **MoE** (Mixture of Experts) models show total/active parameter counts.
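
The hunk context says the model ID string is passed directly to the API. Below is a minimal sketch of such a call, assuming `api.nomyo.ai` exposes an OpenAI-compatible `/v1/chat/completions` endpoint with Bearer-token auth; the endpoint path, auth scheme, and the `NOMYO_API_KEY` variable name are illustrative assumptions, not confirmed by this commit.

```python
import os

import requests  # third-party: pip install requests

# Assumptions (not confirmed by this commit): api.nomyo.ai speaks the
# OpenAI-compatible chat completions protocol at /v1/chat/completions
# and accepts a Bearer token. NOMYO_API_KEY is a hypothetical env var.
API_URL = "https://api.nomyo.ai/v1/chat/completions"
API_KEY = os.environ["NOMYO_API_KEY"]


def chat(model_id: str, prompt: str) -> str:
    """Send one user message to `model_id` and return the reply text."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            # The model ID string from the table, passed through unchanged.
            "model": model_id,
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(chat("Qwen/Qwen3.5-9B", "Name three European languages."))
```

Any model ID from the table above, including the long-context MoE variants, should drop into the `model` field unchanged.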