Commit graph

244 commits

Author SHA1 Message Date
403abd5357 fix: added PAT REGISTRY_TOKEN 2026-04-02 19:26:43 +02:00
0a69e56e61 fix: secrets 2026-04-02 18:42:52 +02:00
9a1dff4649 fix: buildkit container network access 2026-04-02 17:37:39 +02:00
ceed676b94 fix: add dns 2026-04-02 17:16:53 +02:00
49c7030e1d fix: revert registry url 2026-04-02 16:56:08 +02:00
df445bab88 fix: correct registry path 2026-04-02 14:51:05 +02:00
a5fb82d006 fix: remove dind 2026-04-02 13:42:49 +02:00
82f2c034da fix: wait for docker daemon for docker info to succeed 2026-04-02 13:39:25 +02:00
3d01ba7408 fix: bypass automatic repo base_url 2026-04-02 13:35:49 +02:00
929a972c16 fix: repo url 2026-04-02 13:30:01 +02:00
1e709814c7 feat: add forgejo workflows 2026-04-02 12:49:04 +02:00
b899ac8559 feat: add all models to TPS graph in dashboard 2026-04-01 18:10:48 +02:00
f0dd124118 doc: update repo base_url 2026-04-01 17:00:14 +02:00
031de165a1 feat: prettify dashboard 2026-03-27 16:24:57 +01:00
c796fd6a47 fix: add missing git to docker for semcache dependency install 2026-03-23 17:06:46 +01:00
c0dc0a10af fix: catch non-standard openai sdk error bodies for parsing 2026-03-12 19:08:01 +01:00
1e9996c393 fix: exclude embedding models from preemptive context shift caches 2026-03-12 18:56:51 +01:00
21d6835253 Merge pull request #37 from nomyo-ai/dev-v0.7.x-semcache
Dev v0.7.x semcache addtl. feature
2026-03-12 16:08:23 +01:00
e416542bf8 fix: model name normalization for context_cash preemptive context-shifting on smaller context windows that previously failed 2026-03-12 16:08:01 +01:00
be60a348e1 fix: changing error_cache to stale-while-revalidate same as available_models_cache 2026-03-12 14:47:54 +01:00
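The stale-while-revalidate switch above can be illustrated with a minimal sketch; the `SWRCache` class, its synchronous refresh, and the TTL default are assumptions for illustration, not the project's actual error_cache implementation:

```python
import time


class SWRCache:
    """Stale-while-revalidate sketch: serve the cached value immediately;
    once the entry is older than `ttl`, try to refresh it, but fall back
    to the stale copy if the refresh fails (hypothetical simplification:
    the refresh here runs inline rather than in the background)."""

    def __init__(self, loader, ttl=30.0):
        self.loader, self.ttl = loader, ttl
        self.value, self.stamp = None, 0.0

    def get(self):
        now = time.monotonic()
        if self.value is None:
            # Cold cache: a load is unavoidable.
            self.value, self.stamp = self.loader(), now
        elif now - self.stamp > self.ttl:
            try:
                # Stale: refresh, but keep serving the old value on error.
                self.value, self.stamp = self.loader(), now
            except Exception:
                pass
        return self.value


calls = []


def loader():
    calls.append(1)
    return len(calls)


cache = SWRCache(loader, ttl=1000)
first = cache.get()   # cold: calls the loader
second = cache.get()  # fresh: served from cache, loader not called again
```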
9acc37951a feat: add reactive auto context-shift in openai endpoints to recover from out-of-context errors 2026-03-12 10:15:52 +01:00
95c643109a feat: add an openai retry if a request with an image is sent to a text-only model 2026-03-12 10:06:18 +01:00
1ae989788b fix(router): normalize multimodal input to extract text for embeddings
Extract text parts from multimodal payloads (lists/dicts).
Skip image_url and other non-text types to ensure embedding
models receive compatible text-only input.
2026-03-11 16:41:21 +01:00
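The normalization this commit describes might look like the following sketch; `extract_text` is a hypothetical name, and the payload shape assumes the OpenAI-style content-part format (`{"type": "text", ...}` / `{"type": "image_url", ...}`):

```python
def extract_text(content):
    """Reduce a multimodal OpenAI-style `content` payload to plain text.

    Strings pass through unchanged; lists of content parts keep only
    `text` parts and skip `image_url` (and any other non-text) entries,
    so embedding models receive text-only input.
    """
    if isinstance(content, str):
        return content
    if isinstance(content, list):
        parts = []
        for part in content:
            if isinstance(part, dict) and part.get("type") == "text":
                parts.append(part.get("text", ""))
        return " ".join(parts)
    if isinstance(content, dict) and content.get("type") == "text":
        return content.get("text", "")
    return ""
```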
7468bfffbb Merge branch 'main' into dev-v0.7.x 2026-03-11 09:47:13 +01:00
ca773d6ddb Merge pull request #35 from nomyo-ai/dev-v0.7.x-semcache
Dev v0.7.x semcache -> dev-v0.7.x
2026-03-11 09:40:55 +01:00
46da392a53 fix: semcache version pinned 2026-03-11 09:40:00 +01:00
95d03d828e Merge pull request #34 from nomyo-ai/dev-v0.7.x
docs: adding ghcr docker pull instructions
2026-03-10 15:58:45 +01:00
fbdc73eebb fix: improvements, fixes and opt-in cache
doc: semantic-cache.md added with detailed write-up
2026-03-10 15:19:37 +01:00
a5108486e3 conf: clean default conf 2026-03-08 09:35:40 +01:00
e8b8981421 doc: updated usage.md 2026-03-08 09:26:53 +01:00
dd4b12da6a feat: adding a semantic cache layer 2026-03-08 09:12:09 +01:00
c3d47c7ffe docs: adding ghcr docker pull instructions 2026-03-05 11:54:42 +01:00
cce8e66c3e Merge pull request #32 from nomyo-ai/dev-v0.7.x
Dev v0.7.x -> main
2026-03-05 11:12:38 +01:00
b951cc82e3 bump version 2026-03-05 11:09:20 +01:00
00a06dca51 feat: add docker publish workflow 2026-03-05 11:09:16 +01:00
e51969a2bb Merge pull request #30 from nomyo-ai/dev-v0.7.x
- improved performance
- added /v1/rerank endpoint
- refactor of choose_endpoints for atomic upgrade of usage counters
- fixes for security, type- and keyerrors
- improved database handling
2026-03-04 11:01:22 +01:00
8037706f0b fix(db.py): remove full table scans with proper where clauses for dashboard statistics and calc in db rather than python 2026-03-03 17:20:33 +01:00
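The change this commit describes (filter and aggregate in the database instead of scanning every row into Python) can be sketched with sqlite3; the `requests` table and its columns are hypothetical, since the real schema is not shown in the log:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE requests (model TEXT, tokens INTEGER, ts TEXT)")
conn.executemany(
    "INSERT INTO requests VALUES (?, ?, ?)",
    [("a", 10, "2026-03-01"), ("a", 20, "2026-03-02"), ("b", 5, "2026-03-01")],
)

# Before: fetch every row and aggregate in Python (full table scan + loop).
rows = conn.execute("SELECT model, tokens FROM requests").fetchall()
total_py = sum(t for m, t in rows if m == "a")

# After: filter with a WHERE clause and compute the sum in the database.
(total_db,) = conn.execute(
    "SELECT COALESCE(SUM(tokens), 0) FROM requests WHERE model = ?", ("a",)
).fetchone()

assert total_py == total_db == 30
```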
45315790d1 fix(router.py):
- added globals for orphaned token_worker_task and flush_task
- fixed a regex to effectively _mask_secrets
- fixed several TypeErrors and KeyErrors
- fixed model deduplication for llama_server_endpoints
2026-03-03 16:34:16 +01:00
e96e890511 refactor: make choose_endpoint use cache incrementer for atomic updates 2026-03-03 14:57:37 +01:00
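The intent of this refactor (replace a separate read-modify-write with a single atomic increment when reserving an endpoint slot) might be sketched like this; the `UsageCounter` class and the lock-based counter are assumptions for illustration, not the project's actual cache:

```python
import threading


class UsageCounter:
    """Atomic per-model usage counter: one lock guards every update, so
    concurrent increments can never race a separate read-then-write."""

    def __init__(self):
        self._lock = threading.Lock()
        self._counts = {}

    def incr(self, key, delta=1):
        with self._lock:
            self._counts[key] = self._counts.get(key, 0) + delta
            return self._counts[key]


counter = UsageCounter()
# choose_endpoint() can now reserve a slot atomically instead of reading
# the count, picking an endpoint, then writing the count back.
counter.incr("llama-3:8b")   # request starts
counter.incr("llama-3:8b")   # second concurrent request
counter.incr("llama-3:8b", -1)  # first request finishes
```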
e7196146ad feat: add uvloop to requirements.txt as optional dependency to improve performance in high concurrent scenarios 2026-03-03 10:31:10 +01:00
10c83c3e1e fix(router): treat missing status as loaded for llama model check
Add check for `status is None` in `_is_llama_model_loaded`.
Models without a status field (e.g., single-model servers) are
assumed to be always loaded rather than failing the check.
Also updated docstring to clarify this behavior.
2026-03-02 08:54:46 +01:00
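The check described in this commit can be sketched as follows; the function name mirrors `_is_llama_model_loaded` from the message, but the model-entry shape and the `"loaded"` status value are assumptions:

```python
def is_llama_model_loaded(model_entry):
    """Return True if a llama-server model should count as loaded.

    Servers that expose a `status` field report load state explicitly;
    single-model servers often omit the field entirely, so a missing
    (None) status is treated as "always loaded" instead of failing
    the check.
    """
    status = model_entry.get("status")
    if status is None:
        return True  # no status field: assume a single-model server
    return status == "loaded"
```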
cac0580eec feat: adding /v1/rerank endpoint with cohere,jina,llama.cpp compatibility 2026-02-28 09:31:25 +01:00
ad4a1d07b2 fix(/v1/embeddings): returning the async_gen forced FastAPI serialization, which caused Pydantic errors. Also sanitized nan/inf values to floats (0.0).
Use try/finally to properly decrement usage counters in case of error.
2026-02-27 16:39:27 +01:00
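The two fixes in this commit (materialize the result instead of returning an async generator, sanitize non-finite floats, and decrement counters in a `finally`) might look like this sketch; `embed`, the counter dict, and the `run_embedding` callable are hypothetical stand-ins for the real handler:

```python
import asyncio
import math


def sanitize(values):
    """Replace non-finite floats (nan/inf) with 0.0 so the response
    serializes cleanly instead of raising Pydantic validation errors."""
    return [v if math.isfinite(v) else 0.0 for v in values]


async def embed(request, counters, run_embedding):
    counters[request["model"]] = counters.get(request["model"], 0) + 1
    try:
        vectors = await run_embedding(request)
        # Materialize the result here; returning an async generator would
        # force FastAPI to serialize the generator object and fail.
        return [sanitize(v) for v in vectors]
    finally:
        # Decrement even when run_embedding raises, so usage counters
        # cannot leak on errors.
        counters[request["model"]] -= 1


async def _fake(request):
    return [[1.0, float("nan"), float("inf")]]


counters = {}
result = asyncio.run(embed({"model": "m"}, counters, _fake))
```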
2542f10dfc Merge pull request #29 from edingc/main
fix: suppress dockerfile build warnings
2026-02-25 13:15:40 +01:00
a5a0bd51c0 Merge pull request #27 from nomyo-ai/dev-v0.6.X
Dev v0.6.x to prod
2026-02-25 13:08:15 +01:00
Cody Eding
d17ce8380d fix: suppress dockerfile build warnings 2026-02-19 19:41:58 -05:00
d2ea65f74a fix(router): use normalized model keys for endpoint selection
Refactor endpoint selection logic to consistently use tracking model keys (normalized via `get_tracking_model`) instead of raw model names, ensuring usage counts are accurately compared with how increment/decrement operations store them. This fixes inconsistent load balancing and model affinity behavior caused by mismatches between raw and tracked model identifiers.
2026-02-19 17:32:54 +01:00
07751ddd3b fix: endpoint selection logic again 2026-02-19 10:11:53 +01:00
7cba67cce0 feat(router): normalize model names for usage tracking across endpoints (continued)
Introduce `get_tracking_model()` to standardize model names for consistent usage tracking in Prometheus metrics. This ensures llama-server models are stripped of HF prefixes and quantization suffixes, Ollama models append `:latest` when versionless, and external OpenAI models remain unchanged—aligning all tracking keys with the PS table.
2026-02-18 11:45:37 +01:00
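The `get_tracking_model()` normalization described above might be sketched like this; the exact prefix and quantization-suffix rules are inferred from the commit message and are assumptions, not the project's actual regexes:

```python
import re


def get_tracking_model(name: str, backend: str) -> str:
    """Normalize a model name into its usage-tracking key.

    Hypothetical rules, inferred from the commit message:
    - llama-server: drop an HF `org/` prefix and a quantization suffix
      such as `-Q4_K_M` or `.Q8_0`.
    - ollama: append `:latest` when no version tag is present.
    - external openai models: leave unchanged.
    """
    if backend == "llama-server":
        name = name.split("/")[-1]                    # strip HF org prefix
        name = re.sub(r"[-.](I?Q\d\w*)$", "", name)   # strip quant suffix
        return name
    if backend == "ollama":
        return name if ":" in name else f"{name}:latest"
    return name
```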
b2980a7d24 fix(router): handle invalid version responses with 503 error
Filter out non-string version responses (e.g., empty lists from failed requests) and return a 503 Service Unavailable error if no valid versions are received from any endpoint.
2026-02-17 15:56:09 +01:00