# Confirm endpoints are exactly as in endpoints block
api_keys:
  "http://192.168.0.50:11434": "ollama"
  "http://192.168.0.51:11434": "ollama"
  "http://192.168.0.52:11434": "ollama"
  "https://api.openai.com/v1": "${OPENAI_KEY}"
```
## Configuration Options
### `endpoints`
**Type**: `list[str]`
**Description**: List of endpoint URLs to route requests to. The list can include both Ollama endpoints (`http://host:11434`) and OpenAI-compatible endpoints (`https://api.openai.com/v1`).
**Examples**:
```yaml
endpoints:
  - http://localhost:11434
  - http://ollama1:11434
  - http://ollama2:11434
  - https://api.openai.com/v1
  - https://api.anthropic.com/v1
```
**Notes**:
- Ollama endpoints use the standard `/api/` prefix
- OpenAI-compatible endpoints use the `/v1` path prefix
- The router automatically detects endpoint type based on URL pattern
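The detection rule can be sketched as follows (`detect_endpoint_type` is an illustrative name, not part of the router's actual API):

```python
def detect_endpoint_type(url: str) -> str:
    """Classify an endpoint by URL pattern (illustrative sketch,
    not the router's actual implementation)."""
    # Paths ending in /v1 are treated as OpenAI-compatible;
    # everything else is assumed to speak Ollama's /api/ protocol.
    return "openai" if url.rstrip("/").endswith("/v1") else "ollama"

print(detect_endpoint_type("https://api.openai.com/v1"))  # openai
print(detect_endpoint_type("http://localhost:11434"))     # ollama
```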
### `max_concurrent_connections`
**Type**: `int`
**Default**: `1`
**Description**: Maximum number of concurrent connections allowed per endpoint-model pair. This corresponds to Ollama's `OLLAMA_NUM_PARALLEL` setting.
**Example**:
```yaml
max_concurrent_connections: 4
```
**Notes**:
- This setting controls how many requests can be processed simultaneously for a specific model on a specific endpoint
- When this limit is reached, the router will route requests to other endpoints with available capacity
- Higher values allow more parallel requests but may increase memory usage
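A minimal sketch of the capacity check, assuming a simple in-process counter per endpoint-model pair (all names here are illustrative, not the router's internals):

```python
from collections import defaultdict

MAX_CONCURRENT = 4  # corresponds to max_concurrent_connections

# Active request count per (endpoint, model) pair.
active = defaultdict(int)

def pick_endpoint(endpoints, model):
    """Return the first endpoint with spare capacity for `model`, else None."""
    for ep in endpoints:
        if active[(ep, model)] < MAX_CONCURRENT:
            return ep
    return None  # every endpoint is saturated for this model

eps = ["http://ollama1:11434", "http://ollama2:11434"]
active[("http://ollama1:11434", "llama3")] = 4  # first endpoint saturated
print(pick_endpoint(eps, "llama3"))  # http://ollama2:11434
```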
### `nomyo-router-api-key`
**Type**: `str`
**Description**: Shared secret that gates access to the NOMYO Router APIs and dashboard. When set, clients must send `Authorization: Bearer <key>` or an `api_key` query parameter.
**Example**:
```yaml
nomyo-router-api-key: "super-secret-value"
```
**Notes**:
- Leave this blank or omit it to disable router-level authentication.
- You can also set the `NOMYO_ROUTER_API_KEY` environment variable to avoid storing the key in plain text.
**Description**: Path to the SQLite database file for storing token counts. If not set, defaults to `token_counts.db` in the current working directory.
## Response Caching
NOMYO Router can cache LLM responses and serve them directly — skipping endpoint selection, model load, and token generation entirely.
### How it works
1. On every cacheable request (`/api/chat`, `/api/generate`, `/v1/chat/completions`, `/v1/completions`) the cache is checked **before** choosing an endpoint.
2. On a **cache hit** the stored response is returned immediately as a single chunk (streaming or non-streaming — both work).
3. On a **cache miss** the request is forwarded normally. The response is stored in the cache after it completes.
4. **MOE requests** (`moe-*` model prefix) always bypass the cache.
5. **Token counts** are never recorded for cache hits.
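The flow above can be sketched as follows, with a plain exact-match dict standing in for the real cache backend (all names are illustrative):

```python
cache = {}
calls = []  # records every forwarded request

def handle_request(model: str, prompt: str, forward):
    """Cache-first request flow (sketch)."""
    if model.startswith("moe-"):       # MOE requests always bypass the cache
        return forward(model, prompt)
    key = (model, prompt)
    if key in cache:                   # hit: respond without touching any endpoint
        return cache[key]
    response = forward(model, prompt)  # miss: route to an endpoint as usual
    cache[key] = response              # store after the response completes
    return response

def fake_forward(model, prompt):
    calls.append(model)
    return f"reply to {prompt!r}"

handle_request("llama3", "hi", fake_forward)
handle_request("llama3", "hi", fake_forward)  # second call is a cache hit
print(len(calls))  # 1
```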
### Cache key strategy
| Signal | How matched |
|---|---|
| `model + system_prompt` | Exact — hard context isolation per deployment |
| BM25-weighted embedding of chat history | Semantic — conversation context signal |
| Embedding of last user message | Semantic — the actual question |
The two semantic vectors are combined as a weighted mean (tuned by `cache_history_weight`) before the cosine similarity comparison, so the cache key remains a single 384-dimensional vector compatible with the library's storage format.
### Quick start — exact match (lean image)
```yaml
cache_enabled: true
cache_backend: sqlite # persists across restarts
cache_similarity: 1.0 # exact match only, no sentence-transformers needed
```
#### `cache_enabled`
**Type**: `bool`
Enable or disable the cache. All other cache settings are ignored when `false`.
#### `cache_backend`
**Type**: `str` | **Default**: `"memory"`
| Value | Description | Persists | Multi-replica |
|---|---|---|---|
| `memory` | In-process LRU dict | ❌ | ❌ |
| `sqlite` | File-based via `aiosqlite` | ✅ | ❌ |
| `redis` | Redis via `redis.asyncio` | ✅ | ✅ |
Use `redis` when running multiple router replicas behind a load balancer — all replicas share one warm cache.
#### `cache_similarity`
**Type**: `float` | **Default**: `1.0`
Cosine similarity threshold. `1.0` means exact match only (no embedding model needed). Values below `1.0` enable semantic matching, which requires the `:semantic` Docker image tag.
Recommended starting value for semantic mode: `0.90`.
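The threshold comparison can be sketched as below (in practice a threshold of `1.0` lets the router skip embeddings entirely and match request text exactly; `cosine` and `is_hit` are illustrative names):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def is_hit(stored_vec, query_vec, threshold):
    """A cached entry counts as a hit when similarity meets the threshold."""
    return cosine(stored_vec, query_vec) >= threshold

print(is_hit([1.0, 0.0], [1.0, 0.0], 1.0))   # True  (identical vectors)
print(is_hit([1.0, 0.0], [0.9, 0.1], 1.0))   # False (exact mode rejects near-misses)
print(is_hit([1.0, 0.0], [0.9, 0.1], 0.90))  # True  (semantic mode accepts)
```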
#### `cache_ttl`
**Type**: `int | null` | **Default**: `3600`
Time-to-live for cache entries in seconds. Remove the key or set to `null` to cache forever.
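The expiry rule can be sketched as follows (`is_expired` is an illustrative helper, not the router's actual code):

```python
import time

def is_expired(stored_at: float, ttl, now=None) -> bool:
    """An entry expires once its age exceeds the TTL in seconds;
    a TTL of None means entries never expire."""
    if ttl is None:
        return False
    if now is None:
        now = time.time()
    return now - stored_at > ttl
```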
#### `cache_db_path`
**Type**: `str` | **Default**: `"llm_cache.db"`
Path to the SQLite cache database. Only used when `cache_backend: sqlite`.
Redis connection URL. Only used when `cache_backend: redis`.
#### `cache_history_weight`
**Type**: `float` | **Default**: `0.3`
Weight of the BM25-weighted chat-history embedding in the combined cache key vector. `0.3` means the history contributes 30% and the final user message contributes 70% of the similarity signal. Only used when `cache_similarity < 1.0`.
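The combination can be sketched as a weighted mean (`combine` is an illustrative name; real embeddings are 384-dimensional, toy 2-dimensional vectors are used here):

```python
HISTORY_WEIGHT = 0.3  # cache_history_weight

def combine(history_vec, message_vec, w=HISTORY_WEIGHT):
    """Weighted mean of the chat-history embedding and the last-user-message
    embedding; the result keeps the input dimensionality."""
    return [w * h + (1.0 - w) * m for h, m in zip(history_vec, message_vec)]

# History contributes 30% of the signal, the final user message 70%.
print(combine([1.0, 0.0], [0.0, 1.0]))
```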
### Cache management endpoints
| Endpoint | Method | Description |
|---|---|---|
| `/api/cache/stats` | `GET` | Hit/miss counters, hit rate, current config |
| `/api/cache/invalidate` | `POST` | Clear all cache entries and reset counters |
```bash
# Check cache performance
curl http://localhost:12434/api/cache/stats
# Clear the cache
curl -X POST http://localhost:12434/api/cache/invalidate
```