doc: add API rate limits and models

This commit is contained in:
Alpha Nerd 2026-04-13 14:59:33 +02:00
parent 85426b365d
commit 19d6d1f32c
Signed by: alpha-nerd
SSH key fingerprint: SHA256:QkkAgVoYi9TQ0UKPkiKSfnerZy2h4qhi3SVPXJmBN+M
3 changed files with 120 additions and 3 deletions

doc/models.md Normal file

@@ -0,0 +1,48 @@
# Available Models
All models are available via `api.nomyo.ai`. Pass the model ID string directly to the `model` parameter of `create()`.
## Model List
| Model ID | Parameters | Type | Notes |
|---|---|---|---|
| `Qwen/Qwen3-0.6B` | 0.6B | General | Lightweight, fast inference |
| `Qwen/Qwen3.5-0.8B` | 0.8B | General | Lightweight, fast inference |
| `LiquidAI/LFM2.5-1.2B-Thinking` | 1.2B | Thinking | Reasoning model |
| `ibm-granite/granite-4.0-h-small` | Small | General | IBM Granite 4.0, enterprise-focused |
| `Qwen/Qwen3.5-9B` | 9B | General | Balanced quality and speed |
| `utter-project/EuroLLM-9B-Instruct-2512` | 9B | General | Multilingual, strong European language support |
| `zai-org/GLM-4.7-Flash` | — | General | Fast GLM variant |
| `mistralai/Ministral-3-14B-Instruct-2512-GGUF` | 14B | General | Mistral instruction-tuned |
| `ServiceNow-AI/Apriel-1.6-15b-Thinker` | 15B | Thinking | Reasoning model |
| `openai/gpt-oss-20b` | 20B | General | OpenAI open-weight release |
| `LiquidAI/LFM2-24B-A2B` | 24B (2B active) | General | MoE — efficient inference |
| `Qwen/Qwen3.5-27B` | 27B | General | High quality, large context |
| `google/medgemma-27b-it` | 27B | Specialized | Medical domain, instruction-tuned |
| `nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-NVFP4` | 30B (3B active) | General | MoE — efficient inference |
| `Qwen/Qwen3.5-35B-A3B` | 35B (3B active) | General | MoE — efficient inference |
| `moonshotai/Kimi-Linear-48B-A3B-Instruct` | 48B (3B active) | General | MoE — large capacity, efficient inference |
> **MoE** (Mixture of Experts) models show total/active parameter counts. Only active parameters are used per token, keeping inference cost low relative to total model size.
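
To see why this matters for cost, here is a rough back-of-the-envelope sketch. It assumes per-token forward-pass compute scales with roughly 2 × the number of active parameters; the figures are illustrative, not measured throughput for these models.

```python
# Back-of-the-envelope per-token compute (assumption: FLOPs ≈ 2 × active parameters).
dense_27b = 2 * 27e9   # Qwen/Qwen3.5-27B: all 27B parameters are active per token
moe_35b   = 2 * 3e9    # Qwen/Qwen3.5-35B-A3B: only ~3B parameters active per token

# Despite the larger total size, the MoE model does far less work per token.
print(f"Dense 27B vs MoE 35B-A3B: roughly {dense_27b / moe_35b:.0f}x more compute per token")
```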
## Usage Example
```python
import asyncio
from nomyo import SecureChatCompletion

async def main():
    client = SecureChatCompletion(api_key="your-api-key")
    # create() is a coroutine, so it must be awaited inside an async function
    response = await client.create(
        model="Qwen/Qwen3.5-9B",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    print(response)

asyncio.run(main())
```
## Choosing a Model
- **Low latency / edge use**: `Qwen/Qwen3-0.6B`, `Qwen/Qwen3.5-0.8B`, `LiquidAI/LFM2.5-1.2B-Thinking`
- **Balanced quality and speed**: `Qwen/Qwen3.5-9B`, `mistralai/Ministral-3-14B-Instruct-2512-GGUF`
- **Reasoning / chain-of-thought**: `LiquidAI/LFM2.5-1.2B-Thinking`, `ServiceNow-AI/Apriel-1.6-15b-Thinker`
- **Multilingual**: `utter-project/EuroLLM-9B-Instruct-2512`
- **Medical**: `google/medgemma-27b-it`
- **Highest quality**: `moonshotai/Kimi-Linear-48B-A3B-Instruct`, `Qwen/Qwen3.5-35B-A3B`
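
If you select a model programmatically, a minimal sketch along these lines keeps the choice in one place. The `RECOMMENDED_MODELS` mapping and the `ask()` helper below are purely illustrative and simply mirror the recommendations above; they are not part of the `nomyo` client.

```python
from nomyo import SecureChatCompletion

# Illustrative mapping of use case to model ID (mirrors the list above).
RECOMMENDED_MODELS = {
    "low-latency": "Qwen/Qwen3-0.6B",
    "balanced": "Qwen/Qwen3.5-9B",
    "reasoning": "ServiceNow-AI/Apriel-1.6-15b-Thinker",
    "multilingual": "utter-project/EuroLLM-9B-Instruct-2512",
    "medical": "google/medgemma-27b-it",
    "highest-quality": "moonshotai/Kimi-Linear-48B-A3B-Instruct",
}

async def ask(client: SecureChatCompletion, use_case: str, prompt: str):
    # Fall back to the balanced default when the use case is unknown.
    model = RECOMMENDED_MODELS.get(use_case, RECOMMENDED_MODELS["balanced"])
    return await client.create(
        model=model,
        messages=[{"role": "user", "content": prompt}]
    )
```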