Anish Sarkar
680a1c1c38
refactor(openrouter): remove virtual openrouter/free auto-select entry
2026-05-01 18:16:47 +05:30
Anish Sarkar
925c33abd1
chore(config): deprecate billing_tier / anonymous_enabled, split anon flags
2026-05-01 17:42:44 +05:30
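The split above presumably replaces one coarse anonymous-access switch with independent flags. A minimal sketch of that shape using pydantic settings; the new flag names below are assumptions, not SurfSense's actual config keys:

```python
# Hypothetical sketch of the config split described above.
# Flag names are assumptions, not SurfSense's real settings keys.
from pydantic_settings import BaseSettings


class AppSettings(BaseSettings):
    # Deprecated: kept temporarily so existing .env files still parse.
    billing_tier: str | None = None        # deprecated
    anonymous_enabled: bool | None = None  # deprecated, superseded below

    # New split anonymous-access flags (illustrative names).
    anon_chat_enabled: bool = False
    anon_search_enabled: bool = False


settings = AppSettings()
```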
DESKTOP-RTLN3BA$punk
4a51ccdc2c
cloud: added openrouter integration with global configs
2026-04-15 23:46:29 -07:00
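SurfSense routes LLM calls through LiteLLM (the MiniMax commit below names a LiteLLMProvider enum), and LiteLLM addresses OpenRouter with an openrouter/ model prefix. A minimal sketch of such a call; the model slug and message are illustrative:

```python
# Sketch of an OpenRouter call via LiteLLM; model slug is illustrative.
import os

import litellm

response = litellm.completion(
    model="openrouter/openai/gpt-4o",  # openrouter/<provider>/<model>
    api_key=os.environ["OPENROUTER_API_KEY"],
    messages=[{"role": "user", "content": "Hello from SurfSense"}],
)
print(response.choices[0].message.content)
```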
DESKTOP-RTLN3BA$punk
ff4e0f9b62
feat: no login experience and prem tokens
2026-04-15 17:02:00 -07:00
CREDO23
36b8a84b0b
Add vision LLM config examples to global_llm_config.example.yaml
2026-04-07 21:55:58 +02:00
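The commit above adds vision examples to global_llm_config.example.yaml. A guess at what one such entry might look like once parsed; every key name here is assumed, not the file's actual schema:

```python
# Assumed shape of a parsed vision LLM config entry; keys are
# illustrative guesses, not SurfSense's actual schema.
vision_llm_example = {
    "provider": "openai",
    "model": "gpt-4o",          # any vision-capable model
    "api_key": "${OPENAI_API_KEY}",
    "max_tokens": 4096,
}
```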
Anish Sarkar
000c2d9b5b
style: simplify LLM model terminology in UI
2026-04-02 10:11:35 +05:30
PR Bot
760aa38225
feat: complete MiniMax LLM provider integration
Add full MiniMax provider support across the entire stack:
Backend:
- Add MINIMAX to LiteLLMProvider enum in db.py
- Add MINIMAX mapping to all provider_map dicts in llm_service.py,
llm_router_service.py, and llm_config.py
- Add Alembic migration (rev 106) for PostgreSQL enum
- Add MiniMax M2.5 example in global_llm_config.example.yaml
Frontend:
- Add MiniMax to LLM_PROVIDERS enum with apiBase
- Add MiniMax-M2.5 and MiniMax-M2.5-highspeed to LLM_MODELS
- Add MINIMAX to Zod validation schema
- Add MiniMax SVG icon and wire up in provider-icons
Docs:
- Add MiniMax setup guide in chinese-llm-setup.md
MiniMax uses an OpenAI-compatible API (https://api.minimax.io/v1)
with models supporting up to a 204K-token context window.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 07:27:47 +08:00
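Since MiniMax is OpenAI-compatible (per the commit body), the backend wiring largely reduces to a base-URL override on LiteLLM's OpenAI adapter. A sketch of the pieces the commit names; the enum members, map values, and key placeholder are illustrative assumptions:

```python
# Sketch of the backend wiring described above; enum members and
# provider_map entries beyond MINIMAX are illustrative.
import enum

import litellm


class LiteLLMProvider(str, enum.Enum):
    OPENAI = "OPENAI"
    MINIMAX = "MINIMAX"  # new member added by this commit


# One of the provider_map dicts the commit mentions; the litellm
# prefixes and other entries are assumptions.
provider_map = {
    LiteLLMProvider.OPENAI: "openai",
    LiteLLMProvider.MINIMAX: "openai",  # OpenAI-compatible, custom base URL
}

response = litellm.completion(
    model="openai/MiniMax-M2.5",           # route via the OpenAI adapter
    api_base="https://api.minimax.io/v1",  # from the commit message
    api_key="<MINIMAX_API_KEY>",
    messages=[{"role": "user", "content": "hi"}],
)
```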
DESKTOP-RTLN3BA$punk
0d031cb2c2
refactor: update image generation configuration to remove TPM references and clarify RPM usage in comments
2026-02-05 18:07:27 -08:00
DESKTOP-RTLN3BA$punk
19e2857343
feat: added image gen support
2026-02-05 16:43:48 -08:00
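The commit message carries no implementation detail; against an OpenAI-compatible backend, image generation typically looks like the call below. Client choice, model name, and prompt are assumptions, not SurfSense's actual code:

```python
# Illustrative image-generation call; SurfSense's actual integration
# may differ (the commit message gives no implementation detail).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="dall-e-3",  # assumed model
    prompt="a surfboard on a wave, watercolor",
    n=1,
    size="1024x1024",
)
print(result.data[0].url)
```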
DESKTOP-RTLN3BA$punk
6f92eac3da
try(hotpatch): add autoscaling command
2026-02-02 11:36:54 -08:00
DESKTOP-RTLN3BA$punk
6fb656fd8f
hotpatch(cloud): add llm load balancing
2026-01-29 15:28:31 -08:00
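LiteLLM's Router is a plausible basis for this load balancing (llm_router_service.py appears in the MiniMax commit above): multiple deployments share one alias, and the RPM-focused commit above suggests per-deployment rpm limits. The deployments, keys, and limits below are illustrative:

```python
# Sketch of LiteLLM Router-style load balancing; deployments, keys,
# and rpm limits are illustrative, not SurfSense's actual values.
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-4o",  # alias callers use
            "litellm_params": {
                "model": "openai/gpt-4o",
                "api_key": "<KEY_1>",
                "rpm": 60,  # requests per minute for this deployment
            },
        },
        {
            "model_name": "gpt-4o",  # second deployment, same alias
            "litellm_params": {
                "model": "azure/gpt-4o",
                "api_key": "<KEY_2>",
                "api_base": "https://<resource>.openai.azure.com",
                "rpm": 60,
            },
        },
    ],
    routing_strategy="simple-shuffle",  # litellm's default strategy
)

response = router.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "hello"}],
)
```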
DESKTOP-RTLN3BA$punk
4a0c3e368a
feat: migrated to surfsense deep agent
2025-12-23 01:16:25 -08:00
DESKTOP-RTLN3BA$punk
d4345f75e5
feat: added global llm configurations
2025-11-14 21:53:46 -08:00
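This oldest commit introduces the global LLM configuration that later commits build on (e.g. global_llm_config.example.yaml). A minimal loader sketch; the file path, key name, and fallback behavior are assumptions:

```python
# Minimal sketch of loading a global LLM config such as
# global_llm_config.example.yaml; path and key names are assumptions.
from pathlib import Path

import yaml


def load_global_llm_config(path: str = "global_llm_config.yaml") -> dict:
    """Parse the shared LLM config; callers fall back to per-user settings."""
    text = Path(path).read_text(encoding="utf-8")
    return yaml.safe_load(text) or {}


config = load_global_llm_config()
print(config.get("default_llm"))  # assumed key
```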