feat: complete MiniMax LLM provider integration

Add full MiniMax provider support across the entire stack:

Backend:
- Add MINIMAX to LiteLLMProvider enum in db.py
- Add MINIMAX mapping to all provider_map dicts in llm_service.py,
  llm_router_service.py, and llm_config.py
- Add Alembic migration (rev 106) for PostgreSQL enum
- Add MiniMax M2.5 example in global_llm_config.example.yaml

Frontend:
- Add MiniMax to LLM_PROVIDERS enum with apiBase
- Add MiniMax-M2.5 and MiniMax-M2.5-highspeed to LLM_MODELS
- Add MINIMAX to Zod validation schema
- Add MiniMax SVG icon and wire up in provider-icons

Docs:
- Add MiniMax setup guide in chinese-llm-setup.md

MiniMax uses an OpenAI-compatible API (https://api.minimax.io/v1)
with models supporting up to 204K context window.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Commit 760aa38225 (parent 54e56e1b9a) by PR Bot, 2026-03-13 07:27:47 +08:00
13 changed files with 123 additions and 2 deletions


@@ -14,6 +14,7 @@ SurfSense now supports the following Chinese LLMs
- ✅ **Alibaba Qwen (阿里通义千问)** - Alibaba Cloud Tongyi Qianwen large models
- ✅ **Moonshot Kimi (月之暗面)** - Moonshot AI Kimi large models
- ✅ **Zhipu AI GLM (智谱 AI)** - Zhipu AI GLM model family
- ✅ **MiniMax** - MiniMax large models (M2.5 series, 204K context)
---
@@ -197,6 +198,52 @@ API Base URL: https://open.bigmodel.cn/api/paas/v4
---
## 5️⃣ MiniMax Configuration

### Get an API Key

1. Visit the [MiniMax Open Platform](https://platform.minimaxi.com/)
2. Register and sign in
3. Go to the **API Keys** page
4. Create a new API Key
5. Copy the API Key
### Configure in SurfSense

| Field | Value | Notes |
|------|-----|------|
| **Configuration Name** | `MiniMax M2.5` | Configuration name (your choice) |
| **Provider** | `MINIMAX` | Select MiniMax |
| **Model Name** | `MiniMax-M2.5` | Recommended model<br>Alternative: `MiniMax-M2.5-highspeed` |
| **API Key** | `eyJ...` | Your MiniMax API Key |
| **API Base URL** | `https://api.minimax.io/v1` | MiniMax API endpoint |
| **Parameters** | `{"temperature": 1.0}` | Note: temperature must be in (0.0, 1.0] and cannot be 0 |
### Example Configuration
```
Configuration Name: MiniMax M2.5
Provider: MINIMAX
Model Name: MiniMax-M2.5
API Key: eyJxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
API Base URL: https://api.minimax.io/v1
```
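Since MiniMax exposes an OpenAI-compatible API, a plain chat-completion request body works against `https://api.minimax.io/v1/chat/completions`. A minimal sketch of such a payload (illustrative only, not SurfSense code):

```python
import json

# Hypothetical example of the JSON body an OpenAI-compatible client sends
# to MiniMax's /v1/chat/completions endpoint.
payload = {
    "model": "MiniMax-M2.5",
    "messages": [{"role": "user", "content": "Summarize this page."}],
    # MiniMax accepts temperature only in (0.0, 1.0]; 0 is rejected.
    "temperature": 1.0,
}
body = json.dumps(payload)
print(body)
```

Pair this body with an `Authorization: Bearer <API Key>` header, exactly as with any other OpenAI-compatible endpoint.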
### Available Models

- **MiniMax-M2.5**: high-performance general-purpose model, 204K context window (recommended)
- **MiniMax-M2.5-highspeed**: high-speed inference variant, 204K context window

### Notes

- **temperature parameter**: MiniMax requires temperature in (0.0, 1.0]; it cannot be set to 0. A value of 1.0 is recommended.
- Both models support a 204K ultra-long context window, well suited to long-text tasks.
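The temperature constraint can also be enforced client-side before a request is sent. A minimal sketch, assuming a hypothetical `validate_minimax_temperature` helper (not part of SurfSense):

```python
def validate_minimax_temperature(temperature: float) -> float:
    """Reject values MiniMax would refuse: temperature must be in (0.0, 1.0]."""
    if not 0.0 < temperature <= 1.0:
        raise ValueError(
            f"MiniMax requires temperature in (0.0, 1.0], got {temperature}"
        )
    return temperature

validate_minimax_temperature(1.0)  # accepted
# validate_minimax_temperature(0.0) would raise ValueError
```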
### Pricing

- See the [MiniMax pricing page](https://platform.minimaxi.com/document/Price) for current prices
---
## ⚙️ Advanced Configuration

### Custom Parameters
@@ -268,8 +315,8 @@ docker compose logs backend | grep -i "error"
|---------|---------|------|
| **Document summarization** | Qwen-Plus, GLM-4 | Balances performance and cost |
| **Code analysis** | DeepSeek-Coder | Code-specialized |
| **Long-text processing** | Kimi 128K | Ultra-long context |
| **Fast responses** | Qwen-Turbo, GLM-4-Flash | Speed first |
| **Long-text processing** | Kimi 128K, MiniMax-M2.5 (204K) | Ultra-long context |
| **Fast responses** | Qwen-Turbo, GLM-4-Flash, MiniMax-M2.5-highspeed | Speed first |
### 2. Cost Optimization
@@ -294,6 +341,7 @@ docker compose logs backend | grep -i "error"
- [Alibaba Cloud Model Studio docs](https://help.aliyun.com/zh/model-studio/)
- [Moonshot AI docs](https://platform.moonshot.cn/docs)
- [Zhipu AI docs](https://open.bigmodel.cn/dev/api)
- [MiniMax docs](https://platform.minimaxi.com/document/Guides)

### SurfSense Documentation


@@ -0,0 +1,23 @@
"""Add MINIMAX to LiteLLMProvider enum

Revision ID: 106
Revises: 105
"""

from collections.abc import Sequence

from alembic import op

# revision identifiers, used by Alembic.
revision: str = "106"
down_revision: str | None = "105"
branch_labels: str | Sequence[str] | None = None
depends_on: str | Sequence[str] | None = None


def upgrade() -> None:
    # ALTER TYPE ... ADD VALUE cannot run inside a transaction block on older
    # PostgreSQL versions (and on newer ones the new value is unusable within
    # the same transaction), so close Alembic's implicit transaction first.
    op.execute("COMMIT")
    op.execute("ALTER TYPE litellmprovider ADD VALUE IF NOT EXISTS 'MINIMAX'")


def downgrade() -> None:
    # No-op: PostgreSQL has no ALTER TYPE ... DROP VALUE, so removing an enum
    # value would require rebuilding the whole type.
    pass


@@ -59,6 +59,7 @@ PROVIDER_MAP = {
    "DATABRICKS": "databricks",
    "COMETAPI": "cometapi",
    "HUGGINGFACE": "huggingface",
    "MINIMAX": "openai",
    "CUSTOM": "custom",
}
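For context, an entry in this map is typically joined with the model name to form litellm's `<prefix>/<model>` routing string; the `build_model_string` helper below is an assumed illustration, not actual SurfSense code. `MINIMAX` maps to `"openai"` because the MiniMax API is OpenAI-compatible, so requests go through the OpenAI-style client with a custom `api_base`.

```python
# Hypothetical sketch of how a provider-map entry becomes a litellm-style
# model string (trimmed map for illustration).
PROVIDER_MAP = {
    "MINIMAX": "openai",
    "HUGGINGFACE": "huggingface",
}

def build_model_string(provider: str, model_name: str) -> str:
    # Unknown providers fall back to the lowercased provider name,
    # mirroring the provider_map.get(provider, provider.lower()) pattern.
    prefix = PROVIDER_MAP.get(provider, provider.lower())
    return f"{prefix}/{model_name}"

print(build_model_string("MINIMAX", "MiniMax-M2.5"))  # openai/MiniMax-M2.5
```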


@@ -183,6 +183,23 @@ global_llm_configs:
    use_default_system_instructions: true
    citations_enabled: true

  # Example: MiniMax M2.5 - High-performance with 204K context window
  - id: -8
    name: "Global MiniMax M2.5"
    description: "MiniMax M2.5 with 204K context window and competitive pricing"
    provider: "MINIMAX"
    model_name: "MiniMax-M2.5"
    api_key: "your-minimax-api-key-here"
    api_base: "https://api.minimax.io/v1"
    rpm: 60
    tpm: 100000
    litellm_params:
      temperature: 1.0  # MiniMax requires temperature in (0.0, 1.0], cannot be 0
      max_tokens: 4000
    system_instructions: ""
    use_default_system_instructions: true
    citations_enabled: true
# =============================================================================
# Image Generation Configuration
# =============================================================================


@@ -215,6 +215,7 @@ class LiteLLMProvider(StrEnum):
    COMETAPI = "COMETAPI"
    HUGGINGFACE = "HUGGINGFACE"
    GITHUB_MODELS = "GITHUB_MODELS"
    MINIMAX = "MINIMAX"
    CUSTOM = "CUSTOM"


@@ -85,6 +85,7 @@ PROVIDER_MAP = {
    "ZHIPU": "openai",
    "GITHUB_MODELS": "github",
    "HUGGINGFACE": "huggingface",
    "MINIMAX": "openai",
    "CUSTOM": "custom",
}


@@ -127,6 +127,7 @@ async def validate_llm_config(
        "ALIBABA_QWEN": "openai",
        "MOONSHOT": "openai",
        "ZHIPU": "openai",  # GLM needs special handling
        "MINIMAX": "openai",
        "GITHUB_MODELS": "github",
    }
    provider_prefix = provider_map.get(provider, provider.lower())
@@ -277,6 +278,7 @@ async def get_search_space_llm_instance(
        "ALIBABA_QWEN": "openai",
        "MOONSHOT": "openai",
        "ZHIPU": "openai",
        "MINIMAX": "openai",
    }
    provider_prefix = provider_map.get(
        global_config["provider"], global_config["provider"].lower()
@@ -350,6 +352,7 @@ async def get_search_space_llm_instance(
        "ALIBABA_QWEN": "openai",
        "MOONSHOT": "openai",
        "ZHIPU": "openai",
        "MINIMAX": "openai",
        "GITHUB_MODELS": "github",
    }
    provider_prefix = provider_map.get(


@@ -12,6 +12,7 @@ export { default as FireworksAiIcon } from "./fireworksai.svg";
export { default as GeminiIcon } from "./gemini.svg";
export { default as GroqIcon } from "./groq.svg";
export { default as HuggingFaceIcon } from "./huggingface.svg";
export { default as MiniMaxIcon } from "./minimax.svg";
export { default as MistralIcon } from "./mistral.svg";
export { default as MoonshotIcon } from "./moonshot.svg";
export { default as NscaleIcon } from "./nscale.svg";


@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24"><path fill="currentColor" d="M21.6 4.8h-2.4l-4.2 7.2-3-5.16h-.01L9 12 4.8 4.8H2.4L9 16.14V20.4h2.4v-4.2h1.2v4.2h2.4v-4.26z"/></svg>



@@ -1525,6 +1525,20 @@ export const LLM_MODELS: LLMModel[] = [
		provider: "GITHUB_MODELS",
		contextWindow: "64K",
	},
	// MiniMax
	{
		value: "MiniMax-M2.5",
		label: "MiniMax M2.5",
		provider: "MINIMAX",
		contextWindow: "204K",
	},
	{
		value: "MiniMax-M2.5-highspeed",
		label: "MiniMax M2.5 Highspeed",
		provider: "MINIMAX",
		contextWindow: "204K",
	},
];
// Helper function to get models by provider


@@ -181,6 +181,13 @@ export const LLM_PROVIDERS: LLMProvider[] = [
		description: "AI models from GitHub Marketplace",
		apiBase: "https://models.github.ai/inference",
	},
	{
		value: "MINIMAX",
		label: "MiniMax",
		example: "MiniMax-M2.5, MiniMax-M2.5-highspeed",
		description: "High-performance models with 204K context",
		apiBase: "https://api.minimax.io/v1",
	},
	{
		value: "CUSTOM",
		label: "Custom Provider",


@@ -34,6 +34,7 @@ export const liteLLMProviderEnum = z.enum([
	"COMETAPI",
	"HUGGINGFACE",
	"GITHUB_MODELS",
	"MINIMAX",
	"CUSTOM",
]);


@@ -15,6 +15,7 @@ import {
	GeminiIcon,
	GroqIcon,
	HuggingFaceIcon,
	MiniMaxIcon,
	MistralIcon,
	MoonshotIcon,
	NscaleIcon,
@@ -85,6 +86,8 @@ export function getProviderIcon(
			return <GroqIcon className={cn(className)} />;
		case "HUGGINGFACE":
			return <HuggingFaceIcon className={cn(className)} />;
		case "MINIMAX":
			return <MiniMaxIcon className={cn(className)} />;
		case "MISTRAL":
			return <MistralIcon className={cn(className)} />;
		case "MOONSHOT":