doc: add API rate limits and models docs

Alpha Nerd 2026-04-13 14:59:33 +02:00
parent 85426b365d
commit 19d6d1f32c
Signed by: alpha-nerd
SSH key fingerprint: SHA256:QkkAgVoYi9TQ0UKPkiKSfnerZy2h4qhi3SVPXJmBN+M
3 changed files with 120 additions and 3 deletions


@@ -44,9 +44,11 @@ asyncio.run(main())
1. [Installation](installation.md) - How to install and set up the client
2. [Getting Started](getting-started.md) - Quick start guide with examples
3. [API Reference](api-reference.md) - Complete API documentation
-4. [Security Guide](security-guide.md) - Security features and best practices
-5. [Examples](examples.md) - Advanced usage scenarios
-6. [Troubleshooting](troubleshooting.md) - Common issues and solutions
+4. [Models](models.md) - Available models and selection guide
+5. [Security Guide](security-guide.md) - Security features and best practices
+6. [Examples](examples.md) - Advanced usage scenarios
+7. [Rate Limits](rate-limits.md) - Request limits, burst allowance, and error handling
+8. [Troubleshooting](troubleshooting.md) - Common issues and solutions
## Key Features

doc/models.md Normal file

@@ -0,0 +1,48 @@
# Available Models
All models are available via `api.nomyo.ai`. Pass the model ID string directly to the `model` parameter of `create()`.
## Model List
| Model ID | Parameters | Type | Notes |
|---|---|---|---|
| `Qwen/Qwen3-0.6B` | 0.6B | General | Lightweight, fast inference |
| `Qwen/Qwen3.5-0.8B` | 0.8B | General | Lightweight, fast inference |
| `LiquidAI/LFM2.5-1.2B-Thinking` | 1.2B | Thinking | Reasoning model |
| `ibm-granite/granite-4.0-h-small` | Small | General | IBM Granite 4.0, enterprise-focused |
| `Qwen/Qwen3.5-9B` | 9B | General | Balanced quality and speed |
| `utter-project/EuroLLM-9B-Instruct-2512` | 9B | General | Multilingual, strong European language support |
| `zai-org/GLM-4.7-Flash` | — | General | Fast GLM variant |
| `mistralai/Ministral-3-14B-Instruct-2512-GGUF` | 14B | General | Mistral instruction-tuned |
| `ServiceNow-AI/Apriel-1.6-15b-Thinker` | 15B | Thinking | Reasoning model |
| `openai/gpt-oss-20b` | 20B | General | OpenAI open-weight release |
| `LiquidAI/LFM2-24B-A2B` | 24B (2B active) | General | MoE — efficient inference |
| `Qwen/Qwen3.5-27B` | 27B | General | High quality, large context |
| `google/medgemma-27b-it` | 27B | Specialized | Medical domain, instruction-tuned |
| `nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-NVFP4` | 30B (3B active) | General | MoE — efficient inference |
| `Qwen/Qwen3.5-35B-A3B` | 35B (3B active) | General | MoE — efficient inference |
| `moonshotai/Kimi-Linear-48B-A3B-Instruct` | 48B (3B active) | General | MoE — large capacity, efficient inference |
> **MoE** (Mixture of Experts) models show total/active parameter counts. Only active parameters are used per token, keeping inference cost low relative to total model size.
## Usage Example
```python
import asyncio

from nomyo import SecureChatCompletion

async def main():
    client = SecureChatCompletion(api_key="your-api-key")
    response = await client.create(
        model="Qwen/Qwen3.5-9B",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    print(response)

asyncio.run(main())
```
## Choosing a Model
- **Low latency / edge use**: `Qwen/Qwen3-0.6B`, `Qwen/Qwen3.5-0.8B`, `LiquidAI/LFM2.5-1.2B-Thinking`
- **Balanced quality and speed**: `Qwen/Qwen3.5-9B`, `mistralai/Ministral-3-14B-Instruct-2512-GGUF`
- **Reasoning / chain-of-thought**: `LiquidAI/LFM2.5-1.2B-Thinking`, `ServiceNow-AI/Apriel-1.6-15b-Thinker`
- **Multilingual**: `utter-project/EuroLLM-9B-Instruct-2512`
- **Medical**: `google/medgemma-27b-it`
- **Highest quality**: `moonshotai/Kimi-Linear-48B-A3B-Instruct`, `Qwen/Qwen3.5-35B-A3B`
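If you select a model programmatically, the guidance above can be encoded as a small lookup table. The sketch below is illustrative only: the task-category names are ours, not API concepts, and the fallback default is an assumption.
```python
# Hypothetical helper encoding the selection guide above.
# Category names are illustrative; they are not part of the API.
MODEL_BY_TASK = {
    "low-latency": "Qwen/Qwen3-0.6B",
    "balanced": "Qwen/Qwen3.5-9B",
    "reasoning": "ServiceNow-AI/Apriel-1.6-15b-Thinker",
    "multilingual": "utter-project/EuroLLM-9B-Instruct-2512",
    "medical": "google/medgemma-27b-it",
    "quality": "moonshotai/Kimi-Linear-48B-A3B-Instruct",
}

def pick_model(task: str) -> str:
    # Assumed fallback: the balanced default for unrecognized tasks.
    return MODEL_BY_TASK.get(task, "Qwen/Qwen3.5-9B")
```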

doc/rate-limits.md Normal file

@@ -0,0 +1,67 @@
# Rate Limits
The NOMYO API (`api.nomyo.ai`) enforces rate limits to ensure fair usage and service stability for all users.
## Default Rate Limit
By default, each API key is limited to **2 requests per second**.
## Burst Allowance
Short bursts above the default limit are permitted: you may send up to **4 requests per second** in burst mode. Burst capacity is granted once per **10-second window**; once you consume the allowance, you must wait for the window to reset before bursting again. A client-side throttle sketch follows the summary table below.
## Rate Limit Summary
| Mode | Limit | Condition |
|---------|--------------------|----------------------------------|
| Default | 2 requests/second | Always active |
| Burst | 4 requests/second | Once per 10-second window |
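To stay within these limits client-side, a throttle along the following lines may help. This is a minimal sketch under one reading of the window semantics (a single 1-second burst per 10-second window); it is not part of the nomyo client, and the server remains the source of truth.
```python
import asyncio
import time

class BurstThrottle:
    """Sketch: 2 req/s sustained, plus one 1-second burst of up to
    4 req/s per 10-second window (assumed semantics)."""

    def __init__(self, sustained=2, burst=4, window=10.0):
        self.sustained = sustained
        self.burst = burst
        self.window = window
        self.sent = []                    # request timestamps from the last second
        self.burst_start = float("-inf")  # when the current burst began

    async def acquire(self):
        while True:
            now = time.monotonic()
            # Keep only requests sent within the last second.
            self.sent = [t for t in self.sent if now - t < 1.0]
            if len(self.sent) < self.sustained:
                break  # within the sustained rate
            in_burst = now - self.burst_start < 1.0
            may_start_burst = now - self.burst_start >= self.window
            if (in_burst or may_start_burst) and len(self.sent) < self.burst:
                if may_start_burst:
                    self.burst_start = now  # spend this window's burst allowance
                break
            await asyncio.sleep(0.05)  # wait for capacity to free up
        self.sent.append(time.monotonic())
```
Call `await throttle.acquire()` immediately before each API request.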
## Error Responses
### 429 Too Many Requests
Returned when your request rate exceeds the allowed limit.
```
HTTP/1.1 429 Too Many Requests
```
**What to do:** Back off and retry after a short delay. Implement exponential backoff in your client to avoid repeated limit hits.
### 503 Service Unavailable (Cool-down)
Returned when burst limits are abused repeatedly. A **30-minute cool-down** is applied to the offending API key.
```
HTTP/1.1 503 Service Unavailable
```
**What to do:** Wait 30 minutes before retrying. Review your request patterns to ensure you stay within the permitted limits.
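If you prefer to surface the cool-down explicitly instead of retrying blindly, one option is a dedicated error type. The sketch below assumes, as in the backoff example further down, that responses expose a `status_code`; `CooldownError` and `create_checked` are hypothetical names, not part of the client.
```python
class CooldownError(RuntimeError):
    """Hypothetical error type: the API key is in the 30-minute cool-down."""

async def create_checked(client, *args, **kwargs):
    # Assumes the response object exposes a status_code,
    # as in the backoff example below.
    response = await client.create(*args, **kwargs)
    if response.status_code == 503:
        raise CooldownError("API key in cool-down; retry after 30 minutes")
    return response
```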
## Best Practices
- **Throttle your requests** client-side to stay at or below 2 requests/second under normal load.
- **Use burst sparingly** — it is intended for occasional spikes, not sustained high-throughput usage.
- **Implement exponential backoff** when you receive a `429` response. Start with a short delay (e.g. 500 ms) and double it on each subsequent failure, up to a reasonable maximum.
- **Monitor for `503` responses** — repeated occurrences indicate that your usage pattern is triggering the abuse threshold. Refactor your request logic before the cool-down expires.
## Example: Exponential Backoff
```python
import asyncio

async def request_with_backoff(client, *args, max_retries=5, **kwargs):
    delay = 0.5  # start at 500 ms, as recommended above
    for _ in range(max_retries):
        response = await client.create(*args, **kwargs)
        if response.status_code == 429:
            await asyncio.sleep(delay)
            delay = min(delay * 2, 30)  # double on each failure, capped at 30 s
            continue
        return response
    raise RuntimeError("Rate limit exceeded after maximum retries")
```
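For example, with the `SecureChatCompletion` client from the models guide (a sketch; the arguments mirror the usage example there):
```python
response = await request_with_backoff(
    client,
    model="Qwen/Qwen3.5-9B",
    messages=[{"role": "user", "content": "Hello!"}],
)
```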