Commit graph

5 commits

PR Bot · 760aa38225 · 2026-03-13 07:27:47 +08:00
feat: complete MiniMax LLM provider integration

Add full MiniMax provider support across the entire stack:

Backend:
- Add MINIMAX to LiteLLMProvider enum in db.py (see the sketch after this list)
- Add MINIMAX mapping to all provider_map dicts in llm_service.py,
  llm_router_service.py, and llm_config.py
- Add Alembic migration (rev 106) for PostgreSQL enum
- Add MiniMax M2.5 example in global_llm_config.example.yaml
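A minimal sketch of the backend wiring described in the bullets above: the LiteLLMProvider and provider_map names come from this commit, but the existing members, their string values, and the mapping target chosen for MINIMAX are assumptions.

```python
from enum import Enum


class LiteLLMProvider(str, Enum):
    # Existing members elided; MINIMAX is the member this commit adds in db.py.
    OPENAI = "OPENAI"
    ANTHROPIC = "ANTHROPIC"
    MINIMAX = "MINIMAX"


# One of the provider_map dicts (llm_service.py, llm_router_service.py,
# llm_config.py): translates the enum member into the provider key used when
# building the model string for the LLM client. Routing MINIMAX through the
# OpenAI-style path is an assumption based on its OpenAI-compatible endpoint.
provider_map: dict[LiteLLMProvider, str] = {
    LiteLLMProvider.OPENAI: "openai",
    LiteLLMProvider.ANTHROPIC: "anthropic",
    LiteLLMProvider.MINIMAX: "openai",
}
```

The rev 106 Alembic migration then only has to extend the PostgreSQL enum, typically with something like op.execute("ALTER TYPE litellmprovider ADD VALUE 'MINIMAX'"), where the lowercase type name is likewise an assumption.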

Frontend:
- Add MiniMax to LLM_PROVIDERS enum with apiBase
- Add MiniMax-M2.5 and MiniMax-M2.5-highspeed to LLM_MODELS
- Add MINIMAX to Zod validation schema
- Add MiniMax SVG icon and wire up in provider-icons

Docs:
- Add MiniMax setup guide in chinese-llm-setup.md

MiniMax uses an OpenAI-compatible API (https://api.minimax.io/v1)
with models supporting a context window of up to 204K tokens.
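Because the endpoint is OpenAI-compatible, a quick smoke test can point the standard OpenAI Python client at it. This is only a sketch: the MINIMAX_API_KEY variable name is an assumption, and the model id is taken from the LLM_MODELS bullet above.

```python
import os

from openai import OpenAI

# MiniMax exposes an OpenAI-compatible API, so the stock OpenAI client works
# once it is pointed at the MiniMax base URL.
client = OpenAI(
    base_url="https://api.minimax.io/v1",
    api_key=os.environ["MINIMAX_API_KEY"],  # hypothetical env var name
)

response = client.chat.completions.create(
    model="MiniMax-M2.5",  # model id from the frontend LLM_MODELS entry above
    messages=[{"role": "user", "content": "Reply with the single word: pong"}],
)
print(response.choices[0].message.content)
```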

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

DESKTOP-RTLN3BA\$punk · a3cd598e01 · 2026-02-20 14:28:01 -08:00
feat: add improved LLM model selector

CREDO23 · 8bc4b255b4 · 2026-02-09 17:54:48 +02:00
Add GitHub Models frontend provider and model suggestions

Eric Lammertsma · 1c1dcbf47f · 2026-02-04 15:52:21 -05:00
Add Gemini 3 Flash and Pro models to LLM_MODELS enum

DESKTOP-RTLN3BA\$punk · 38dffaffa3 · 2025-11-13 02:41:30 -08:00
feat(llm): expand LLM provider options and improve model selection UI

- Added new LLM providers, including Google, Azure OpenAI, Bedrock, and others, to the backend.
- Updated the model selection UI to dynamically display available models based on the selected provider.
- Enhanced provider-change handling to reset the model selection when the provider changes.
- Improved the overall user experience by providing contextual information for model selection.