SurfSense/surfsense_backend/app/config
PR Bot 760aa38225 feat: complete MiniMax LLM provider integration
Add full MiniMax provider support across the entire stack:

Backend:
- Add MINIMAX to LiteLLMProvider enum in db.py
- Add MINIMAX mapping to all provider_map dicts in llm_service.py,
  llm_router_service.py, and llm_config.py
- Add Alembic migration (rev 106) for the PostgreSQL enum (see the
  sketch after this list)
- Add MiniMax-M2.5 example in global_llm_config.example.yaml
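
A minimal sketch of the backend wiring, using only the names the message
above gives (LiteLLMProvider in db.py, the provider_map dicts, Alembic
rev 106); the LiteLLM model prefix, the pre-existing enum members shown,
the PostgreSQL type name, and the down_revision are assumptions, not code
from this PR:

    # db.py -- sketch, assuming a str-based enum
    from enum import Enum

    class LiteLLMProvider(str, Enum):
        OPENAI = "OPENAI"        # assumed pre-existing member
        ANTHROPIC = "ANTHROPIC"  # assumed pre-existing member
        MINIMAX = "MINIMAX"      # the value this PR adds

    # llm_service.py (and the other two modules) -- provider_map translates
    # the enum into a LiteLLM model prefix; "minimax" is an assumed prefix
    provider_map = {
        LiteLLMProvider.OPENAI: "openai",
        LiteLLMProvider.ANTHROPIC: "anthropic",
        LiteLLMProvider.MINIMAX: "minimax",
    }

    # migrations/versions/106_add_minimax.py -- sketch of the enum migration;
    # the PostgreSQL type name "litellmprovider" is an assumption
    from alembic import op

    revision = "106"
    down_revision = "105"  # assumed previous revision

    def upgrade() -> None:
        # PostgreSQL enums gain values via ALTER TYPE; IF NOT EXISTS makes
        # re-runs safe (on PostgreSQL < 12 this may need an autocommit block)
        op.execute("ALTER TYPE litellmprovider ADD VALUE IF NOT EXISTS 'MINIMAX'")

    def downgrade() -> None:
        # PostgreSQL cannot remove a value from an enum type, so this is a no-op
        pass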

Frontend:
- Add MiniMax to LLM_PROVIDERS enum with apiBase
- Add MiniMax-M2.5 and MiniMax-M2.5-highspeed to LLM_MODELS
- Add MINIMAX to Zod validation schema
- Add MiniMax SVG icon and wire up in provider-icons

Docs:
- Add MiniMax setup guide in chinese-llm-setup.md

MiniMax uses an OpenAI-compatible API (https://api.minimax.io/v1)
with models supporting context windows of up to 204K tokens.
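
Because the endpoint is OpenAI-compatible, a quick smoke test works with
the stock openai Python client pointed at that base URL; the
MINIMAX_API_KEY env var name is an assumption, while the model id comes
from this PR:

    # Hedged smoke test against the OpenAI-compatible MiniMax endpoint
    import os

    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.minimax.io/v1",
        api_key=os.environ["MINIMAX_API_KEY"],  # assumed env var name
    )

    response = client.chat.completions.create(
        model="MiniMax-M2.5",  # model id added by this PR
        messages=[{"role": "user", "content": "Reply with one short sentence."}],
    )
    print(response.choices[0].message.content)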

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 07:27:47 +08:00
__init__.py merge upstream/dev into improve-ux-connectors 2026-03-10 23:40:04 +02:00
global_llm_config.example.yaml feat: complete MiniMax LLM provider integration 2026-03-13 07:27:47 +08:00
model_list_fallback.json feat: added improved llm model selector 2026-02-20 14:28:01 -08:00
uvicorn.py Fixed all ruff lint and formatting errors 2025-07-24 14:43:48 -07:00