Merge remote-tracking branch 'upstream/dev' into fix/ui
.vscode/launch.json (1 change)

@@ -71,6 +71,7 @@
             "app.celery_app:celery_app",
             "worker",
             "--loglevel=info",
+            "--queues=surfsense,surfsense.connectors,surfsense-dev,surfsense-dev.connectors",
             "--pool=solo"
         ],
         "console": "integratedTerminal",
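For reference, the debug configuration above maps onto a plain Celery CLI invocation, which is handy when running the worker outside VS Code. A sketch only: it assumes the args preceding this hunk supply `-A` with the app path, and that the backend virtualenv (with `celery` on `PATH`) is active.

```shell
# Reconstruct the worker invocation that the launch.json entry above debugs.
# The queue list mirrors the added line; trim it for a production worker.
QUEUES="surfsense,surfsense.connectors,surfsense-dev,surfsense-dev.connectors"
CMD="celery -A app.celery_app:celery_app worker --loglevel=info --queues=${QUEUES} --pool=solo"
echo "${CMD}"   # run this from the backend directory with its venv active
```

`--pool=solo` runs tasks in the main process, which keeps breakpoints usable when debugging.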
README.es.md (269 changes)
@@ -15,6 +15,9 @@

[English](README.md) | [Español](README.es.md) | [Português](README.pt-BR.md) | [हिन्दी](README.hi.md) | [简体中文](README.zh-CN.md)

</div>
+<div align="center">
+<a href="https://trendshift.io/repositories/13606" target="_blank"><img src="https://trendshift.io/api/badge/repositories/13606" alt="MODSetter%2FSurfSense | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
+</div>

# SurfSense
@@ -22,101 +25,98 @@ Conecta cualquier LLM a tus fuentes de conocimiento internas y chatea con él en

SurfSense es un agente de investigación de IA altamente personalizable, conectado a fuentes externas como motores de búsqueda (SearxNG, Tavily, LinkUp), Google Drive, Slack, Microsoft Teams, Linear, Jira, ClickUp, Confluence, BookStack, Gmail, Notion, YouTube, GitHub, Discord, Airtable, Google Calendar, Luma, Circleback, Elasticsearch, Obsidian y más por venir.

-<div align="center">
-<a href="https://trendshift.io/repositories/13606" target="_blank"><img src="https://trendshift.io/api/badge/repositories/13606" alt="MODSetter%2FSurfSense | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
-</div>

# Video

https://github.com/user-attachments/assets/cc0c84d3-1f2f-4f7a-b519-2ecce22310b1

## Ejemplo de Podcast

https://github.com/user-attachments/assets/a0a16566-6967-4374-ac51-9b3e07fbecd7

## Cómo usar SurfSense

### Cloud

1. Ve a [surfsense.com](https://www.surfsense.com) e inicia sesión.

<p align="center"><img src="https://github.com/user-attachments/assets/b4df25fe-db5a-43c2-9462-b75cf7f1b707" alt="Login" /></p>

2. Conecta tus conectores y sincroniza. Activa la sincronización periódica para mantenerlos actualizados.

<p align="center"><img src="https://github.com/user-attachments/assets/59da61d7-da05-4576-b7c0-dbc09f5985e8" alt="Conectores" /></p>

3. Mientras se indexan los datos de los conectores, sube documentos.

<p align="center"><img src="https://github.com/user-attachments/assets/d1e8b2e2-9eac-41d8-bdc0-f0cdc405d128" alt="Subir Documentos" /></p>

4. Una vez que todo esté indexado, pregunta lo que quieras (Casos de uso):

- Búsqueda básica y citaciones

<p align="center"><img src="https://github.com/user-attachments/assets/81e797a1-e01a-4003-8e60-0a0b3a9789df" alt="Búsqueda y Citación" /></p>

- QNA con mención de documentos

<p align="center"><img src="https://github.com/user-attachments/assets/be958295-0a8c-4707-998c-9fe1f1c007be" alt="QNA con Mención de Documentos" /></p>

- Generación de informes y exportaciones (PDF, DOCX por ahora)

<p align="center"><img src="https://github.com/user-attachments/assets/9836b7d6-57c9-4951-b61c-68202c9b6ace" alt="Generación de Informes" /></p>

- Generación de podcasts

<p align="center"><img src="https://github.com/user-attachments/assets/58c9b057-8848-4e81-aaba-d2c617985d8c" alt="Generación de Podcasts" /></p>

- Generación de imágenes

<p align="center"><img src="https://github.com/user-attachments/assets/25f94cb3-18f8-4854-afd9-27b7bfd079cb" alt="Generación de Imágenes" /></p>

### Auto-Hospedado

Ejecuta SurfSense en tu propia infraestructura para control total de datos y privacidad.

**Inicio Rápido (Docker en un solo comando):**

```bash
docker run -d -p 3000:3000 -p 8000:8000 -p 5133:5133 \
  -v surfsense-data:/data \
  --name surfsense \
  --restart unless-stopped \
  ghcr.io/modsetter/surfsense:latest
```

Después de iniciar, abre [http://localhost:3000](http://localhost:3000) en tu navegador.

Para Docker Compose, instalación manual y otras opciones de despliegue, consulta la [documentación](https://www.surfsense.com/docs/).

## Funcionalidades Principales

### 💡 **Idea**:
- Alternativa de código abierto a NotebookLM, Perplexity y Glean. Conecta cualquier LLM a tus fuentes de conocimiento internas y colabora con tu equipo en tiempo real.

### 📁 **Soporte de Múltiples Formatos de Archivo**
- Guarda contenido de tus archivos personales *(Documentos, imágenes, videos y soporta **más de 50 extensiones de archivo**)* en tu propia base de conocimiento personal.

### 🔍 **Búsqueda Potente**
- Investiga o encuentra rápidamente cualquier cosa en tu contenido guardado.

### 💬 **Chatea con tu Contenido Guardado**
- Interactúa en lenguaje natural y obtén respuestas con citas.

### 📄 **Respuestas con Citas**
- Obtén respuestas con citas como en Perplexity.

### 🧩 **Compatibilidad Universal**
- Conecta virtualmente cualquier proveedor de inferencia a través de la especificación OpenAI y LiteLLM.

### 🔔 **Privacidad y Soporte de LLM Local**
- Funciona perfectamente con LLMs locales como vLLM y Ollama.

### 🏠 **Auto-Hospedable**
- Código abierto y fácil de desplegar localmente.

### 👥 **Colaboración en Equipo con RBAC**
- Control de acceso basado en roles para los espacios de búsqueda
- Invita a miembros del equipo con roles personalizables (Propietario, Admin, Editor, Visor)
- Permisos granulares para documentos, chats, conectores y configuración
- Comparte bases de conocimiento de forma segura dentro de tu organización
- Los chats de equipo se actualizan en tiempo real y puedes "Chatear sobre el chat" en hilos de comentarios

### 🎙️ Podcasts
- Agente de generación de podcasts ultrarrápido. (Crea un podcast de 3 minutos en menos de 20 segundos.)
- Convierte tus conversaciones de chat en contenido de audio atractivo
- Soporte para proveedores TTS locales (Kokoro TTS)
- Soporte para múltiples proveedores TTS (OpenAI, Azure, Google Vertex AI)

| Funcionalidad | Descripción |
|----------------|-------------|
| Alternativa OSS | Reemplazo directo de NotebookLM, Perplexity y Glean con colaboración en equipo en tiempo real |
| 50+ Formatos de Archivo | Sube documentos, imágenes, videos vía LlamaCloud, Unstructured o Docling (local) |
| Búsqueda Híbrida | Semántica + Texto completo con Índices Jerárquicos y Reciprocal Rank Fusion |
| Respuestas con Citas | Chatea con tu base de conocimiento y obtén respuestas citadas al estilo Perplexity |
| Arquitectura de Agentes Profundos | Impulsado por [LangChain Deep Agents](https://docs.langchain.com/oss/python/deepagents/overview) con planificación, subagentes y acceso al sistema de archivos |
| Soporte Universal de LLM | 100+ LLMs, 6000+ modelos de embeddings, todos los principales rerankers vía OpenAI spec y LiteLLM |
| Privacidad Primero | Soporte completo de LLM local (vLLM, Ollama), tus datos son tuyos |
| Colaboración en Equipo | RBAC con roles de Propietario / Admin / Editor / Visor, chat en tiempo real e hilos de comentarios |
| Generación de Podcasts | Podcast de 3 min en menos de 20 segundos; múltiples proveedores TTS (OpenAI, Azure, Kokoro) |
| Extensión de Navegador | Extensión multi-navegador para guardar cualquier página web, incluyendo páginas protegidas por autenticación |
| 25+ Conectores | Motores de búsqueda, Google Drive, Slack, Teams, Jira, Notion, GitHub, Discord y [más](#fuentes-externas) |
| Auto-Hospedable | Código abierto, Docker en un solo comando o Docker Compose completo para producción |

### 🤖 **Arquitectura de Agentes Profundos**
- Impulsado por [LangChain Deep Agents](https://docs.langchain.com/oss/python/deepagents/overview) - agentes que pueden planificar, usar subagentes y aprovechar sistemas de archivos para tareas complejas.

<details>
<summary><b>Lista completa de Fuentes Externas</b></summary>
<a id="fuentes-externas"></a>

### 📊 **Técnicas Avanzadas de RAG**
- Soporta más de 100 LLMs
- Soporta más de 6000 modelos de embeddings
- Soporta todos los principales rerankers (Pinecone, Cohere, Flashrank, etc.)
- Utiliza índices jerárquicos (configuración RAG de 2 niveles)
- Utiliza búsqueda híbrida (Semántica + Texto completo combinado con Reciprocal Rank Fusion)

### ℹ️ **Fuentes Externas**
- Motores de búsqueda (Tavily, LinkUp)
- SearxNG (instancias auto-hospedadas)
- Google Drive
- Slack
- Microsoft Teams
- Linear
- Jira
- ClickUp
- Confluence
- BookStack
- Notion
- Gmail
- Videos de YouTube
- GitHub
- Discord
- Airtable
- Google Calendar
- Luma
- Circleback
- Elasticsearch
- Obsidian
- y más por venir.....

## 📄 **Extensiones de Archivo Soportadas**

| Servicio ETL | Formatos | Notas |
|--------------|----------|-------|
| **LlamaCloud** | 50+ formatos | Documentos, presentaciones, hojas de cálculo, imágenes |
| **Unstructured** | 34+ formatos | Formatos principales + soporte de email |
| **Docling** | Formatos principales | Procesamiento local, no requiere clave API |

**Audio/Video** (vía servicio STT): `.mp3`, `.wav`, `.mp4`, `.webm`, etc.

### 🔖 Extensión Multi-Navegador
- La extensión de SurfSense se puede usar para guardar cualquier página web que desees.
- Su principal uso es guardar páginas web protegidas por autenticación.

Motores de Búsqueda (Tavily, LinkUp) · SearxNG · Google Drive · Slack · Microsoft Teams · Linear · Jira · ClickUp · Confluence · BookStack · Notion · Gmail · Videos de YouTube · GitHub · Discord · Airtable · Google Calendar · Luma · Circleback · Elasticsearch · Obsidian, y más por venir.

</details>

## SOLICITUDES DE FUNCIONES Y FUTURO
@@ -126,120 +126,29 @@ https://github.com/user-attachments/assets/a0a16566-6967-4374-ac51-9b3e07fbecd7

¡Únete al [Discord de SurfSense](https://discord.gg/ejRNvftDp9) y ayuda a dar forma al futuro de SurfSense!

-## 🚀 Hoja de Ruta
+## Hoja de Ruta

-¡Mantente al día con nuestro progreso de desarrollo y próximas funcionalidades!
+Consulta nuestra hoja de ruta pública y contribuye con tus ideas o comentarios:

-**📋 Discusión de la Hoja de Ruta:** [SurfSense 2025-2026 Roadmap: Deep Agents, Real-Time Collaboration & MCP Servers](https://github.com/MODSetter/SurfSense/discussions/565)
+**Discusión de la Hoja de Ruta:** [SurfSense 2026 Roadmap](https://github.com/MODSetter/SurfSense/discussions/565)

-**📊 Tablero Kanban:** [SurfSense Project Board](https://github.com/users/MODSetter/projects/3)
+**Tablero Kanban:** [SurfSense Project Board](https://github.com/users/MODSetter/projects/3)

-## ¿Cómo empezar?
+## Contribuir

### Inicio Rápido con Docker 🐳

> [!TIP]
> Para despliegues en producción, usa la configuración completa de [Docker Compose](https://www.surfsense.com/docs/docker-installation) que ofrece más control y escalabilidad.

**Linux/macOS:**

```bash
docker run -d -p 3000:3000 -p 8000:8000 -p 5133:5133 \
  -v surfsense-data:/data \
  --name surfsense \
  --restart unless-stopped \
  ghcr.io/modsetter/surfsense:latest
```

**Windows (PowerShell):**

```powershell
docker run -d -p 3000:3000 -p 8000:8000 -p 5133:5133 `
  -v surfsense-data:/data `
  --name surfsense `
  --restart unless-stopped `
  ghcr.io/modsetter/surfsense:latest
```

**Con Configuración Personalizada:**

Puedes pasar cualquier variable de entorno usando flags `-e`:

```bash
docker run -d -p 3000:3000 -p 8000:8000 -p 5133:5133 \
  -v surfsense-data:/data \
  -e EMBEDDING_MODEL=openai://text-embedding-ada-002 \
  -e OPENAI_API_KEY=your_openai_api_key \
  -e AUTH_TYPE=GOOGLE \
  -e GOOGLE_OAUTH_CLIENT_ID=your_google_client_id \
  -e GOOGLE_OAUTH_CLIENT_SECRET=your_google_client_secret \
  -e ETL_SERVICE=LLAMACLOUD \
  -e LLAMA_CLOUD_API_KEY=your_llama_cloud_key \
  --name surfsense \
  --restart unless-stopped \
  ghcr.io/modsetter/surfsense:latest
```
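For the Docker Compose route mentioned above, the same settings are usually supplied through a `.env` file instead of `-e` flags. A sketch using only the variable names from the command above; all values are placeholders:

```shell
# .env sketch for a Compose-based deployment (same variables as the -e flags).
# Replace every placeholder value before use.
EMBEDDING_MODEL=openai://text-embedding-ada-002
OPENAI_API_KEY=your_openai_api_key
AUTH_TYPE=GOOGLE
GOOGLE_OAUTH_CLIENT_ID=your_google_client_id
GOOGLE_OAUTH_CLIENT_SECRET=your_google_client_secret
ETL_SERVICE=LLAMACLOUD
LLAMA_CLOUD_API_KEY=your_llama_cloud_key
```

Keeping secrets in a `.env` file also keeps them out of your shell history, unlike inline `-e` flags.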
> [!NOTE]
> - Si despliegas detrás de un proxy inverso con HTTPS, agrega `-e BACKEND_URL=https://api.yourdomain.com`

Después de iniciar, accede a SurfSense en:
- **Frontend**: [http://localhost:3000](http://localhost:3000)
- **API Backend**: [http://localhost:8000](http://localhost:8000)
- **Documentación API**: [http://localhost:8000/docs](http://localhost:8000/docs)
- **Electric-SQL**: [http://localhost:5133](http://localhost:5133)
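The endpoints listed above can be smoke-tested from a shell once the container is up. A sketch only: it assumes the default ports from the quick-start command and that `curl` is installed.

```shell
# Probe each published SurfSense endpoint and report its status.
for url in http://localhost:3000 http://localhost:8000/docs http://localhost:5133; do
  if curl -fsS -o /dev/null "$url" 2>/dev/null; then
    echo "OK    $url"
  else
    echo "DOWN  $url"
  fi
done
```

If a port reports DOWN right after startup, give the container a few seconds and check `docker logs -f surfsense`.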
**Comandos Útiles:**

```bash
docker logs -f surfsense   # Ver logs
docker stop surfsense      # Detener
docker start surfsense     # Iniciar
docker rm surfsense        # Eliminar (datos preservados en el volumen)
```

### Opciones de Instalación

SurfSense ofrece múltiples opciones para empezar:

1. **[SurfSense Cloud](https://www.surfsense.com/login)** - La forma más fácil de probar SurfSense sin ninguna configuración.
   - No requiere instalación
   - Acceso instantáneo a todas las funcionalidades
   - Perfecto para empezar rápidamente

2. **Inicio Rápido Docker (Arriba)** - Un solo comando para tener SurfSense funcionando localmente.
   - Imagen todo-en-uno con PostgreSQL, Redis y todos los servicios incluidos
   - Perfecto para evaluación, desarrollo y despliegues pequeños
   - Datos persistidos vía volumen Docker

3. **[Docker Compose (Producción)](https://www.surfsense.com/docs/docker-installation)** - Despliegue de stack completo con servicios separados.
   - Incluye pgAdmin para gestión de base de datos a través de interfaz web
   - Soporta personalización de variables de entorno vía archivo `.env`
   - Opciones de despliegue flexibles (stack completo o solo servicios principales)
   - Mejor para producción con escalado independiente de servicios

4. **[Instalación Manual](https://www.surfsense.com/docs/manual-installation)** - Para usuarios que prefieren más control sobre su configuración o necesitan personalizar su despliegue.

Las guías de Docker e instalación manual incluyen instrucciones detalladas específicas para Windows, macOS y Linux.

Antes de la instalación auto-hospedada, asegúrate de completar los [pasos de configuración previos](https://www.surfsense.com/docs/) incluyendo:
- Configuración de autenticación (opcional - por defecto usa autenticación LOCAL)
- **Servicio ETL de Procesamiento de Archivos** (opcional - por defecto usa Docling):
  - Docling (por defecto, procesamiento local, no requiere clave API, soporta PDF, documentos Office, imágenes, HTML, CSV)
  - Clave API de Unstructured.io (soporta 34+ formatos)
  - Clave API de LlamaIndex (análisis mejorado, soporta 50+ formatos)
- Otras claves API según sea necesario para tu caso de uso

## Contribuir

-¡Las contribuciones son muy bienvenidas! Una contribución puede ser tan pequeña como una ⭐ o incluso encontrar y crear issues.
+¡Las contribuciones son muy bienvenidas! Una contribución puede ser tan pequeña como una estrella o incluso encontrar y crear issues.
El ajuste fino del Backend siempre es deseado.

Para guías detalladas de contribución, consulta nuestro archivo [CONTRIBUTING.md](CONTRIBUTING.md).

## Contribuidores

<a href="https://github.com/MODSetter/SurfSense/graphs/contributors">
<img src="https://contrib.rocks/image?repo=MODSetter/SurfSense" />
</a>

## Historial de Stars

<a href="https://www.star-history.com/#MODSetter/SurfSense&Date">
README.hi.md (269 changes)
@@ -15,6 +15,9 @@

[English](README.md) | [Español](README.es.md) | [Português](README.pt-BR.md) | [हिन्दी](README.hi.md) | [简体中文](README.zh-CN.md)

</div>
+<div align="center">
+<a href="https://trendshift.io/repositories/13606" target="_blank"><img src="https://trendshift.io/api/badge/repositories/13606" alt="MODSetter%2FSurfSense | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
+</div>

# SurfSense
@@ -22,101 +25,98 @@

SurfSense एक अत्यधिक अनुकूलन योग्य AI शोध एजेंट है, जो बाहरी स्रोतों से जुड़ा है जैसे सर्च इंजन (SearxNG, Tavily, LinkUp), Google Drive, Slack, Microsoft Teams, Linear, Jira, ClickUp, Confluence, BookStack, Gmail, Notion, YouTube, GitHub, Discord, Airtable, Google Calendar, Luma, Circleback, Elasticsearch, Obsidian और भी बहुत कुछ आने वाला है।

-<div align="center">
-<a href="https://trendshift.io/repositories/13606" target="_blank"><img src="https://trendshift.io/api/badge/repositories/13606" alt="MODSetter%2FSurfSense | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
-</div>

# वीडियो

https://github.com/user-attachments/assets/cc0c84d3-1f2f-4f7a-b519-2ecce22310b1

## पॉडकास्ट नमूना

https://github.com/user-attachments/assets/a0a16566-6967-4374-ac51-9b3e07fbecd7

## SurfSense का उपयोग कैसे करें

### Cloud

1. [surfsense.com](https://www.surfsense.com) पर जाएं और लॉगिन करें।

<p align="center"><img src="https://github.com/user-attachments/assets/b4df25fe-db5a-43c2-9462-b75cf7f1b707" alt="लॉगिन" /></p>

2. अपने कनेक्टर जोड़ें और सिंक करें। कनेक्टर्स को अपडेट रखने के लिए आवधिक सिंकिंग सक्षम करें।

<p align="center"><img src="https://github.com/user-attachments/assets/59da61d7-da05-4576-b7c0-dbc09f5985e8" alt="कनेक्टर्स" /></p>

3. जब तक कनेक्टर्स का डेटा इंडेक्स हो रहा है, दस्तावेज़ अपलोड करें।

<p align="center"><img src="https://github.com/user-attachments/assets/d1e8b2e2-9eac-41d8-bdc0-f0cdc405d128" alt="दस्तावेज़ अपलोड करें" /></p>

4. सब कुछ इंडेक्स हो जाने के बाद, कुछ भी पूछें (उपयोग के मामले):

- बेसिक सर्च और उद्धरण

<p align="center"><img src="https://github.com/user-attachments/assets/81e797a1-e01a-4003-8e60-0a0b3a9789df" alt="सर्च और उद्धरण" /></p>

- दस्तावेज़ मेंशन QNA

<p align="center"><img src="https://github.com/user-attachments/assets/be958295-0a8c-4707-998c-9fe1f1c007be" alt="दस्तावेज़ मेंशन QNA" /></p>

- रिपोर्ट जनरेशन और एक्सपोर्ट (फ़िलहाल PDF, DOCX)

<p align="center"><img src="https://github.com/user-attachments/assets/9836b7d6-57c9-4951-b61c-68202c9b6ace" alt="रिपोर्ट जनरेशन" /></p>

- पॉडकास्ट जनरेशन

<p align="center"><img src="https://github.com/user-attachments/assets/58c9b057-8848-4e81-aaba-d2c617985d8c" alt="पॉडकास्ट जनरेशन" /></p>

- इमेज जनरेशन

<p align="center"><img src="https://github.com/user-attachments/assets/25f94cb3-18f8-4854-afd9-27b7bfd079cb" alt="इमेज जनरेशन" /></p>

### सेल्फ-होस्टेड

पूर्ण डेटा नियंत्रण और गोपनीयता के लिए SurfSense को अपने स्वयं के बुनियादी ढांचे पर चलाएं।

**त्वरित शुरुआत (Docker एक कमांड में):**

```bash
docker run -d -p 3000:3000 -p 8000:8000 -p 5133:5133 \
  -v surfsense-data:/data \
  --name surfsense \
  --restart unless-stopped \
  ghcr.io/modsetter/surfsense:latest
```

शुरू करने के बाद, अपने ब्राउज़र में [http://localhost:3000](http://localhost:3000) खोलें।

Docker Compose, मैनुअल इंस्टॉलेशन और अन्य डिप्लॉयमेंट विकल्पों के लिए, [डॉक्स](https://www.surfsense.com/docs/) देखें।

## प्रमुख विशेषताएं

### 💡 **विचार**:
- NotebookLM, Perplexity और Glean का ओपन सोर्स विकल्प। किसी भी LLM को अपने आंतरिक ज्ञान स्रोतों से जोड़ें और अपनी टीम के साथ रीयल-टाइम में सहयोग करें।

### 📁 **कई फ़ाइल फ़ॉर्मेट अपलोड सपोर्ट**
- अपनी व्यक्तिगत फ़ाइलों *(दस्तावेज़, चित्र, वीडियो और **50+ फ़ाइल एक्सटेंशन** का समर्थन)* से सामग्री को अपने व्यक्तिगत ज्ञान आधार में सहेजें।

### 🔍 **शक्तिशाली खोज**
- अपनी सहेजी गई सामग्री में कुछ भी तुरंत खोजें या शोध करें।

### 💬 **अपनी सहेजी गई सामग्री के साथ चैट करें**
- प्राकृतिक भाषा में बातचीत करें और उद्धृत उत्तर प्राप्त करें।

### 📄 **उद्धृत उत्तर**
- Perplexity की तरह उद्धृत उत्तर प्राप्त करें।

### 🧩 **सार्वभौमिक अनुकूलता**
- OpenAI स्पेक और LiteLLM के माध्यम से लगभग किसी भी इंफ्रेंस प्रदाता को कनेक्ट करें।

### 🔔 **गोपनीयता और स्थानीय LLM सपोर्ट**
- vLLM और Ollama जैसे स्थानीय LLMs के साथ बेहतरीन काम करता है।

### 🏠 **सेल्फ-होस्ट करने योग्य**
- ओपन सोर्स और स्थानीय रूप से तैनात करना आसान।

### 👥 **RBAC के साथ टीम सहयोग**
- सर्च स्पेस के लिए भूमिका-आधारित एक्सेस नियंत्रण
- अनुकूलन योग्य भूमिकाओं (मालिक, एडमिन, संपादक, दर्शक) के साथ टीम सदस्यों को आमंत्रित करें
- दस्तावेज़ों, चैट, कनेक्टर और सेटिंग्स के लिए विस्तृत अनुमतियां
- अपने संगठन के भीतर सुरक्षित रूप से ज्ञान आधार साझा करें
- टीम चैट रीयल-टाइम में अपडेट होते हैं और कमेंट थ्रेड में "चैट के बारे में चैट" करें

### 🎙️ पॉडकास्ट
- अत्यंत तेज़ पॉडकास्ट जनरेशन एजेंट। (20 सेकंड से कम में 3 मिनट का पॉडकास्ट बनाता है।)
- अपनी चैट बातचीत को आकर्षक ऑडियो सामग्री में बदलें
- स्थानीय TTS प्रदाताओं का समर्थन (Kokoro TTS)
- कई TTS प्रदाताओं का समर्थन (OpenAI, Azure, Google Vertex AI)

| विशेषता | विवरण |
|----------|--------|
| OSS विकल्प | रीयल-टाइम टीम सहयोग के साथ NotebookLM, Perplexity और Glean का सीधा प्रतिस्थापन |
| 50+ फ़ाइल फ़ॉर्मेट | LlamaCloud, Unstructured या Docling (लोकल) के माध्यम से दस्तावेज़, चित्र, वीडियो अपलोड करें |
| हाइब्रिड सर्च | हायरार्किकल इंडाइसेस और Reciprocal Rank Fusion के साथ सिमैंटिक + फुल टेक्स्ट सर्च |
| उद्धृत उत्तर | अपने ज्ञान आधार के साथ चैट करें और Perplexity शैली के उद्धृत उत्तर पाएं |
| डीप एजेंट आर्किटेक्चर | [LangChain Deep Agents](https://docs.langchain.com/oss/python/deepagents/overview) द्वारा संचालित, योजना, सब-एजेंट और फ़ाइल सिस्टम एक्सेस |
| यूनिवर्सल LLM सपोर्ट | 100+ LLMs, 6000+ एम्बेडिंग मॉडल, सभी प्रमुख रीरैंकर्स OpenAI spec और LiteLLM के माध्यम से |
| प्राइवेसी फर्स्ट | पूर्ण लोकल LLM सपोर्ट (vLLM, Ollama), आपका डेटा आपका रहता है |
| टीम सहयोग | मालिक / एडमिन / संपादक / दर्शक भूमिकाओं के साथ RBAC, रीयल-टाइम चैट और कमेंट थ्रेड |
| पॉडकास्ट जनरेशन | 20 सेकंड से कम में 3 मिनट का पॉडकास्ट; कई TTS प्रदाता (OpenAI, Azure, Kokoro) |
| ब्राउज़र एक्सटेंशन | किसी भी वेबपेज को सहेजने के लिए क्रॉस-ब्राउज़र एक्सटेंशन, प्रमाणीकरण सुरक्षित पेज सहित |
| 25+ कनेक्टर्स | सर्च इंजन, Google Drive, Slack, Teams, Jira, Notion, GitHub, Discord और [अधिक](#बाहरी-स्रोत) |
| सेल्फ-होस्ट करने योग्य | ओपन सोर्स, Docker एक कमांड या प्रोडक्शन के लिए पूर्ण Docker Compose |

### 🤖 **डीप एजेंट आर्किटेक्चर**
- [LangChain Deep Agents](https://docs.langchain.com/oss/python/deepagents/overview) द्वारा संचालित - ऐसे एजेंट जो योजना बना सकते हैं, सब-एजेंट का उपयोग कर सकते हैं, और जटिल कार्यों के लिए फ़ाइल सिस्टम का लाभ उठा सकते हैं।

<details>
<summary><b>बाहरी स्रोतों की पूरी सूची</b></summary>
<a id="बाहरी-स्रोत"></a>

### 📊 **उन्नत RAG तकनीकें**
- 100+ LLMs का समर्थन
- 6000+ एम्बेडिंग मॉडल का समर्थन
- सभी प्रमुख रीरैंकर्स का समर्थन (Pinecone, Cohere, Flashrank आदि)
- हायरार्किकल इंडाइसेस का उपयोग (2 स्तरीय RAG सेटअप)
- हाइब्रिड सर्च का उपयोग (सिमैंटिक + फुल टेक्स्ट सर्च, Reciprocal Rank Fusion के साथ)

### ℹ️ **बाहरी स्रोत**
- सर्च इंजन (Tavily, LinkUp)
- SearxNG (सेल्फ-होस्टेड इंस्टेंस)
- Google Drive
- Slack
- Microsoft Teams
- Linear
- Jira
- ClickUp
- Confluence
- BookStack
- Notion
- Gmail
- YouTube वीडियो
- GitHub
- Discord
- Airtable
- Google Calendar
- Luma
- Circleback
- Elasticsearch
- Obsidian
- और भी बहुत कुछ आने वाला है.....

## 📄 **समर्थित फ़ाइल एक्सटेंशन**

| ETL सेवा | फ़ॉर्मेट | नोट्स |
|-----------|----------|-------|
| **LlamaCloud** | 50+ फ़ॉर्मेट | दस्तावेज़, प्रस्तुतियां, स्प्रेडशीट, चित्र |
| **Unstructured** | 34+ फ़ॉर्मेट | मुख्य फ़ॉर्मेट + ईमेल समर्थन |
| **Docling** | मुख्य फ़ॉर्मेट | स्थानीय प्रोसेसिंग, API कुंजी की आवश्यकता नहीं |

**ऑडियो/वीडियो** (STT सेवा के माध्यम से): `.mp3`, `.wav`, `.mp4`, `.webm`, आदि।

### 🔖 क्रॉस-ब्राउज़र एक्सटेंशन
- SurfSense एक्सटेंशन का उपयोग किसी भी वेबपेज को सहेजने के लिए किया जा सकता है।
- इसका मुख्य उपयोग प्रमाणीकरण द्वारा संरक्षित वेबपेजों को सहेजना है।

सर्च इंजन (Tavily, LinkUp) · SearxNG · Google Drive · Slack · Microsoft Teams · Linear · Jira · ClickUp · Confluence · BookStack · Notion · Gmail · YouTube वीडियो · GitHub · Discord · Airtable · Google Calendar · Luma · Circleback · Elasticsearch · Obsidian, और भी बहुत कुछ आने वाला है।

</details>

## फ़ीचर अनुरोध और भविष्य
@ -126,120 +126,29 @@ https://github.com/user-attachments/assets/a0a16566-6967-4374-ac51-9b3e07fbecd7
|
|||
|
||||
[SurfSense Discord](https://discord.gg/ejRNvftDp9) में शामिल हों और SurfSense के भविष्य को आकार देने में मदद करें!
|
||||
|
||||
## 🚀 रोडमैप
|
||||
## रोडमैप
|
||||
|
||||
हमारे विकास की प्रगति और आने वाली सुविधाओं से अपडेट रहें!
|
||||
हमारा सार्वजनिक रोडमैप देखें और अपने विचार या फ़ीडबैक दें:
|
||||
|
||||
**📋 रोडमैप चर्चा:** [SurfSense 2025-2026 Roadmap: Deep Agents, Real-Time Collaboration & MCP Servers](https://github.com/MODSetter/SurfSense/discussions/565)
|
||||
**रोडमैप चर्चा:** [SurfSense 2026 Roadmap](https://github.com/MODSetter/SurfSense/discussions/565)
|
||||
|
||||
**📊 कानबन बोर्ड:** [SurfSense Project Board](https://github.com/users/MODSetter/projects/3)
|
||||
**कानबन बोर्ड:** [SurfSense Project Board](https://github.com/users/MODSetter/projects/3)
|
||||
|
||||
|
||||
## कैसे शुरू करें?
|
||||
## योगदान करें
|
||||
|
||||
### Docker के साथ त्वरित शुरुआत 🐳
|
||||
|
||||
> [!TIP]
|
||||
> प्रोडक्शन डिप्लॉयमेंट के लिए, पूर्ण [Docker Compose सेटअप](https://www.surfsense.com/docs/docker-installation) का उपयोग करें जो अधिक नियंत्रण और स्केलेबिलिटी प्रदान करता है।
|
||||
|
||||
**Linux/macOS:**
|
||||
|
||||
```bash
|
||||
docker run -d -p 3000:3000 -p 8000:8000 -p 5133:5133 \
|
||||
-v surfsense-data:/data \
|
||||
--name surfsense \
|
||||
--restart unless-stopped \
|
||||
ghcr.io/modsetter/surfsense:latest
|
||||
```
|
||||
|
||||
**Windows (PowerShell):**
|
||||
|
||||
```powershell
|
||||
docker run -d -p 3000:3000 -p 8000:8000 -p 5133:5133 `
|
||||
-v surfsense-data:/data `
|
||||
--name surfsense `
|
||||
--restart unless-stopped `
|
||||
ghcr.io/modsetter/surfsense:latest
|
||||
```
|
||||
|
||||
**कस्टम कॉन्फ़िगरेशन के साथ:**
|
||||
|
||||
आप `-e` फ़्लैग का उपयोग करके कोई भी एन्वायरनमेंट वेरिएबल पास कर सकते हैं:
|
||||
|
||||
```bash
|
||||
docker run -d -p 3000:3000 -p 8000:8000 -p 5133:5133 \
|
||||
-v surfsense-data:/data \
|
||||
-e EMBEDDING_MODEL=openai://text-embedding-ada-002 \
|
||||
-e OPENAI_API_KEY=your_openai_api_key \
|
||||
-e AUTH_TYPE=GOOGLE \
|
||||
-e GOOGLE_OAUTH_CLIENT_ID=your_google_client_id \
|
||||
-e GOOGLE_OAUTH_CLIENT_SECRET=your_google_client_secret \
|
||||
-e ETL_SERVICE=LLAMACLOUD \
|
||||
-e LLAMA_CLOUD_API_KEY=your_llama_cloud_key \
|
||||
--name surfsense \
|
||||
--restart unless-stopped \
|
||||
ghcr.io/modsetter/surfsense:latest
|
||||
```
|
||||
|
||||
> [!NOTE]
|
||||
> - यदि HTTPS के साथ रिवर्स प्रॉक्सी के पीछे डिप्लॉय कर रहे हैं, तो `-e BACKEND_URL=https://api.yourdomain.com` जोड़ें
|
||||
|
||||
शुरू करने के बाद, SurfSense तक पहुंचें:
|
||||
- **फ्रंटएंड**: [http://localhost:3000](http://localhost:3000)
|
||||
- **बैकएंड API**: [http://localhost:8000](http://localhost:8000)
|
||||
- **API डॉक्स**: [http://localhost:8000/docs](http://localhost:8000/docs)
|
||||
- **Electric-SQL**: [http://localhost:5133](http://localhost:5133)
|
||||
|
||||
**उपयोगी कमांड:**
|
||||
|
||||
```bash
|
||||
docker logs -f surfsense # लॉग देखें
|
||||
docker stop surfsense # रोकें
|
||||
docker start surfsense # शुरू करें
|
||||
docker rm surfsense # हटाएं (डेटा वॉल्यूम में सुरक्षित रहता है)
|
||||
```
|
||||
|
||||
### इंस्टॉलेशन विकल्प
|
||||
|
||||
SurfSense शुरू करने के लिए कई विकल्प प्रदान करता है:
|
||||
|
||||
1. **[SurfSense Cloud](https://www.surfsense.com/login)** - बिना किसी सेटअप के SurfSense आज़माने का सबसे आसान तरीका।
|
||||
- इंस्टॉलेशन की आवश्यकता नहीं
|
||||
- सभी सुविधाओं तक तत्काल पहुंच
|
||||
- जल्दी शुरू करने के लिए बिल्कुल सही
|
||||
|
||||
2. **Docker Quick Start (above)** - Run SurfSense locally with a single command.
   - All-in-one image with PostgreSQL, Redis, and all services bundled
   - Perfect for evaluation, development, and small deployments
   - Data persisted via a Docker volume

3. **[Docker Compose (Production)](https://www.surfsense.com/docs/docker-installation)** - Full-stack deployment with separate services.
   - Includes pgAdmin for database management through a web UI
   - Supports environment variable customization via a `.env` file
   - Flexible deployment options (full stack or core services only)
   - Better for production, with independent scaling of services

4. **[Manual Installation](https://www.surfsense.com/docs/manual-installation)** - For users who want more control over their setup or need to customize their deployment.

The Docker and manual installation guides include detailed OS-specific instructions for Windows, macOS, and Linux.

Before self-hosting, make sure to complete the [prerequisite setup steps](https://www.surfsense.com/docs/), including:
- Auth setup (optional - defaults to LOCAL auth)
- **File Processing ETL Service** (optional - defaults to Docling):
  - Docling (default, local processing, no API key required; supports PDF, Office docs, images, HTML, CSV)
  - Unstructured.io API key (supports 34+ formats)
  - LlamaIndex API key (enhanced parsing, supports 50+ formats)
- Other API keys as needed for your use case

## Contribute

Contributions are very welcome! A contribution can be as small as a star or even finding and creating issues.
Fine-tuning the backend is always desired.

For detailed contribution guidelines, please see our [CONTRIBUTING.md](CONTRIBUTING.md) file.

## Contributors

<a href="https://github.com/MODSetter/SurfSense/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=MODSetter/SurfSense" />
</a>

## Star History

<a href="https://www.star-history.com/#MODSetter/SurfSense&Date">
---

**README.md**
[English](README.md) | [Español](README.es.md) | [Português](README.pt-BR.md) | [हिन्दी](README.hi.md) | [简体中文](README.zh-CN.md)

</div>
<div align="center">
<a href="https://trendshift.io/repositories/13606" target="_blank"><img src="https://trendshift.io/api/badge/repositories/13606" alt="MODSetter%2FSurfSense | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</div>

# SurfSense

Connect any LLM to your internal knowledge sources and chat with it in real time.

SurfSense is a highly customizable AI research agent, connected to external sources such as Search Engines (SearxNG, Tavily, LinkUp), Google Drive, Slack, Microsoft Teams, Linear, Jira, ClickUp, Confluence, BookStack, Gmail, Notion, YouTube, GitHub, Discord, Airtable, Google Calendar, Luma, Circleback, Elasticsearch, Obsidian, and more to come.
# Video

https://github.com/user-attachments/assets/cc0c84d3-1f2f-4f7a-b519-2ecce22310b1

## Podcast Sample

https://github.com/user-attachments/assets/a0a16566-6967-4374-ac51-9b3e07fbecd7

## How to Use SurfSense

### Cloud

1. Go to [surfsense.com](https://www.surfsense.com) and log in.

<p align="center"><img src="https://github.com/user-attachments/assets/b4df25fe-db5a-43c2-9462-b75cf7f1b707" alt="Login" /></p>

2. Connect your connectors and sync. Enable periodic syncing to keep connectors up to date.

<p align="center"><img src="https://github.com/user-attachments/assets/59da61d7-da05-4576-b7c0-dbc09f5985e8" alt="Connectors" /></p>

3. While connector data is being indexed, upload documents.

<p align="center"><img src="https://github.com/user-attachments/assets/d1e8b2e2-9eac-41d8-bdc0-f0cdc405d128" alt="Upload Documents" /></p>

4. Once everything is indexed, ask away (use cases):

- Basic search and citation

<p align="center"><img src="https://github.com/user-attachments/assets/81e797a1-e01a-4003-8e60-0a0b3a9789df" alt="Search and Citation" /></p>

- Document Mention QNA

<p align="center"><img src="https://github.com/user-attachments/assets/be958295-0a8c-4707-998c-9fe1f1c007be" alt="Document Mention QNA" /></p>

- Report generation and export (PDF and DOCX for now)

<p align="center"><img src="https://github.com/user-attachments/assets/9836b7d6-57c9-4951-b61c-68202c9b6ace" alt="Report Generation" /></p>

- Podcast generation

<p align="center"><img src="https://github.com/user-attachments/assets/58c9b057-8848-4e81-aaba-d2c617985d8c" alt="Podcast Generation" /></p>

- Image generation

<p align="center"><img src="https://github.com/user-attachments/assets/25f94cb3-18f8-4854-afd9-27b7bfd079cb" alt="Image Generation" /></p>
### Self Hosted

Run SurfSense on your own infrastructure for full data control and privacy.

**Quick Start (Docker one-liner):**

```bash
docker run -d -p 3000:3000 -p 8000:8000 -p 5133:5133 \
  -v surfsense-data:/data \
  --name surfsense \
  --restart unless-stopped \
  ghcr.io/modsetter/surfsense:latest
```

After starting, open [http://localhost:3000](http://localhost:3000) in your browser.

For Docker Compose, manual installation, and other deployment options, check the [docs](https://www.surfsense.com/docs/).
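The one-liner above maps directly onto a Compose file. A minimal sketch using the same all-in-one image (the separate-services production stack described in the docs looks different):

```yaml
# compose.yaml - single-container sketch, equivalent to the docker run command above
services:
  surfsense:
    image: ghcr.io/modsetter/surfsense:latest
    container_name: surfsense
    restart: unless-stopped
    ports:
      - "3000:3000"   # Frontend
      - "8000:8000"   # Backend API
      - "5133:5133"   # Electric-SQL
    volumes:
      - surfsense-data:/data

volumes:
  surfsense-data:
```

Run it with `docker compose up -d`; environment variables can be added under an `environment:` key instead of repeated `-e` flags.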
## Key Features

### 💡 **Idea**:
- Open-source alternative to NotebookLM, Perplexity, and Glean. Connect any LLM to your internal knowledge sources and collaborate with your team in real time.
### 📁 **Multiple File Format Uploading Support**
- Save content from your own personal files *(documents, images, videos; supports **50+ file extensions**)* to your own personal knowledge base.
### 🔍 **Powerful Search**
- Quickly research or find anything in your saved content.
### 💬 **Chat with your Saved Content**
- Interact in natural language and get cited answers.
### 📄 **Cited Answers**
- Get cited answers just like Perplexity.
### 🧩 **Universal Compatibility**
- Connect virtually any inference provider via the OpenAI spec and LiteLLM.
### 🔔 **Privacy & Local LLM Support**
- Works flawlessly with local LLMs like vLLM and Ollama.
### 🏠 **Self Hostable**
- Open source and easy to deploy locally.
### 👥 **Team Collaboration with RBAC**
- Role-Based Access Control for Search Spaces
- Invite team members with customizable roles (Owner, Admin, Editor, Viewer)
- Granular permissions for documents, chats, connectors, and settings
- Share knowledge bases securely within your organization
- Team chats update in real time, with "Chat about the chat" in comment threads
### 🎙️ Podcasts
- Blazingly fast podcast generation agent. (Creates a 3-minute podcast in under 20 seconds.)
- Convert your chat conversations into engaging audio content
- Support for local TTS providers (Kokoro TTS)
- Support for multiple TTS providers (OpenAI, Azure, Google Vertex AI)

| Feature | Description |
|---------|-------------|
| OSS Alternative | Drop-in replacement for NotebookLM, Perplexity, and Glean with real-time team collaboration |
| 50+ File Formats | Upload documents, images, and videos via LlamaCloud, Unstructured, or Docling (local) |
| Hybrid Search | Semantic + full-text search with hierarchical indices and Reciprocal Rank Fusion |
| Cited Answers | Chat with your knowledge base and get Perplexity-style cited responses |
| Deep Agent Architecture | Powered by [LangChain Deep Agents](https://docs.langchain.com/oss/python/deepagents/overview): planning, subagents, and file system access |
| Universal LLM Support | 100+ LLMs, 6000+ embedding models, all major rerankers via the OpenAI spec & LiteLLM |
| Privacy First | Full local LLM support (vLLM, Ollama); your data stays yours |
| Team Collaboration | RBAC with Owner / Admin / Editor / Viewer roles, real-time chat & comment threads |
| Podcast Generation | 3-minute podcast in under 20 seconds; multiple TTS providers (OpenAI, Azure, Kokoro) |
| Browser Extension | Cross-browser extension to save any webpage, including auth-protected pages |
| 25+ Connectors | Search engines, Google Drive, Slack, Teams, Jira, Notion, GitHub, Discord & [more](#external-sources) |
| Self Hostable | Open source; Docker one-liner or full Docker Compose for production |
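The Owner / Admin / Editor / Viewer hierarchy above is an ordered set of roles, where each role inherits every permission of the roles below it. A minimal illustrative sketch of such a check (role names from the README; the action names and `can` helper are hypothetical, not SurfSense's actual permission schema):

```python
from enum import IntEnum


class Role(IntEnum):
    """Roles ordered by increasing privilege, so `>=` expresses inheritance."""
    VIEWER = 0
    EDITOR = 1
    ADMIN = 2
    OWNER = 3


# Minimum role required per action (illustrative action names).
REQUIRED = {
    "read_documents": Role.VIEWER,
    "edit_documents": Role.EDITOR,
    "manage_connectors": Role.ADMIN,
    "delete_search_space": Role.OWNER,
}


def can(role: Role, action: str) -> bool:
    """A role may perform an action if it meets the minimum required role."""
    return role >= REQUIRED[action]


print(can(Role.EDITOR, "edit_documents"))     # True
print(can(Role.EDITOR, "manage_connectors"))  # False
```

Encoding roles as an ordered enum keeps the check to a single comparison; fully granular per-resource permissions would need an explicit role-to-permission table instead.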
### 🤖 **Deep Agent Architecture**
- Powered by [LangChain Deep Agents](https://docs.langchain.com/oss/python/deepagents/overview) - agents that can plan, use subagents, and leverage file systems for complex tasks.

<details>
<summary><b>Full list of External Sources</b></summary>
<a id="external-sources"></a>
### 📊 **Advanced RAG Techniques**
- Supports 100+ LLMs
- Supports 6000+ embedding models
- Supports all major rerankers (Pinecone, Cohere, Flashrank, etc.)
- Uses hierarchical indices (2-tier RAG setup)
- Uses hybrid search (semantic + full-text search combined with Reciprocal Rank Fusion)
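Reciprocal Rank Fusion merges the semantic and full-text result lists by scoring each document as the sum of `1 / (k + rank)` over the lists it appears in, so documents ranked well by both retrievers rise to the top. A minimal sketch of the standard formula (the constant `k = 60` and the function name are illustrative assumptions, not SurfSense's actual implementation):

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse ranked result lists.

    rankings: a list of ranked lists of doc IDs, each ordered best-first.
    Returns doc IDs sorted by fused score, best-first.
    """
    scores = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)


semantic = ["a", "b", "c"]   # hypothetical semantic-search ranking
full_text = ["b", "c", "d"]  # hypothetical full-text ranking
print(reciprocal_rank_fusion([semantic, full_text]))  # → ['b', 'c', 'a', 'd']
```

Note how `b` and `c`, which appear high in both lists, outrank `a`, the top semantic hit that the full-text ranking missed.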
### ℹ️ **External Sources**
- Search Engines (Tavily, LinkUp)
- SearxNG (self-hosted instances)
- Google Drive
- Slack
- Microsoft Teams
- Linear
- Jira
- ClickUp
- Confluence
- BookStack
- Notion
- Gmail
- YouTube Videos
- GitHub
- Discord
- Airtable
- Google Calendar
- Luma
- Circleback
- Elasticsearch
- Obsidian
- and more to come...
## 📄 **Supported File Extensions**

| ETL Service | Formats | Notes |
|-------------|---------|-------|
| **LlamaCloud** | 50+ formats | Documents, presentations, spreadsheets, images |
| **Unstructured** | 34+ formats | Core formats + email support |
| **Docling** | Core formats | Local processing, no API key required |

**Audio/Video** (via STT Service): `.mp3`, `.wav`, `.mp4`, `.webm`, etc.
### 🔖 Cross Browser Extension
- The SurfSense extension can be used to save any webpage you like.
- Its main use case is saving webpages that are protected behind authentication.

</details>
## FEATURE REQUESTS AND FUTURE

Join the [SurfSense Discord](https://discord.gg/ejRNvftDp9) and help shape the future of SurfSense!

## Roadmap

Stay up to date with our development progress and upcoming features!
Check out our public roadmap and contribute your ideas or feedback:

**Roadmap Discussion:** [SurfSense 2026 Roadmap](https://github.com/MODSetter/SurfSense/discussions/565)

**Kanban Board:** [SurfSense Project Board](https://github.com/users/MODSetter/projects/3)
## How to get started?

### Quick Start with Docker 🐳

> [!TIP]
> For production deployments, use the full [Docker Compose setup](https://www.surfsense.com/docs/docker-installation), which offers more control and scalability.

**Linux/macOS:**

```bash
docker run -d -p 3000:3000 -p 8000:8000 -p 5133:5133 \
  -v surfsense-data:/data \
  --name surfsense \
  --restart unless-stopped \
  ghcr.io/modsetter/surfsense:latest
```

**Windows (PowerShell):**

```powershell
docker run -d -p 3000:3000 -p 8000:8000 -p 5133:5133 `
  -v surfsense-data:/data `
  --name surfsense `
  --restart unless-stopped `
  ghcr.io/modsetter/surfsense:latest
```

**With Custom Configuration:**

You can pass any environment variable using `-e` flags:

```bash
docker run -d -p 3000:3000 -p 8000:8000 -p 5133:5133 \
  -v surfsense-data:/data \
  -e EMBEDDING_MODEL=openai://text-embedding-ada-002 \
  -e OPENAI_API_KEY=your_openai_api_key \
  -e AUTH_TYPE=GOOGLE \
  -e GOOGLE_OAUTH_CLIENT_ID=your_google_client_id \
  -e GOOGLE_OAUTH_CLIENT_SECRET=your_google_client_secret \
  -e ETL_SERVICE=LLAMACLOUD \
  -e LLAMA_CLOUD_API_KEY=your_llama_cloud_key \
  --name surfsense \
  --restart unless-stopped \
  ghcr.io/modsetter/surfsense:latest
```

> [!NOTE]
> If deploying behind a reverse proxy with HTTPS, add `-e BACKEND_URL=https://api.yourdomain.com`

After starting, access SurfSense at:
- **Frontend**: [http://localhost:3000](http://localhost:3000)
- **Backend API**: [http://localhost:8000](http://localhost:8000)
- **API Docs**: [http://localhost:8000/docs](http://localhost:8000/docs)
- **Electric-SQL**: [http://localhost:5133](http://localhost:5133)

**Useful Commands:**

```bash
docker logs -f surfsense   # View logs
docker stop surfsense      # Stop
docker start surfsense     # Start
docker rm surfsense        # Remove (data preserved in volume)
```
### Installation Options

SurfSense provides multiple options to get started:

1. **[SurfSense Cloud](https://www.surfsense.com/login)** - The easiest way to try SurfSense without any setup.
   - No installation required
   - Instant access to all features
   - Perfect for getting started quickly

2. **Quick Start Docker (above)** - A single command to get SurfSense running locally.
   - All-in-one image with PostgreSQL, Redis, and all services bundled
   - Perfect for evaluation, development, and small deployments
   - Data persisted via a Docker volume

3. **[Docker Compose (Production)](https://www.surfsense.com/docs/docker-installation)** - Full-stack deployment with separate services.
   - Includes pgAdmin for database management through a web UI
   - Supports environment variable customization via a `.env` file
   - Flexible deployment options (full stack or core services only)
   - Better for production, with independent scaling of services

4. **[Manual Installation](https://www.surfsense.com/docs/manual-installation)** - For users who prefer more control over their setup or need to customize their deployment.

The Docker and manual installation guides include detailed OS-specific instructions for Windows, macOS, and Linux.

Before self-hosting, make sure to complete the [prerequisite setup steps](https://www.surfsense.com/docs/), including:
- Auth setup (optional - defaults to LOCAL auth)
- **File Processing ETL Service** (optional - defaults to Docling):
  - Docling (default, local processing, no API key required; supports PDF, Office docs, images, HTML, CSV)
  - Unstructured.io API key (supports 34+ formats)
  - LlamaIndex API key (enhanced parsing, supports 50+ formats)
- Other API keys as needed for your use case

## Contribute

Contributions are very welcome! A contribution can be as small as a star or even finding and creating issues.
Fine-tuning the backend is always desired.

For detailed contribution guidelines, please see our [CONTRIBUTING.md](CONTRIBUTING.md) file.

## Contributors

<a href="https://github.com/MODSetter/SurfSense/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=MODSetter/SurfSense" />
</a>

## Star History

<a href="https://www.star-history.com/#MODSetter/SurfSense&Date">
---

**README.pt-BR.md**
[English](README.md) | [Español](README.es.md) | [Português](README.pt-BR.md) | [हिन्दी](README.hi.md) | [简体中文](README.zh-CN.md)

</div>
<div align="center">
<a href="https://trendshift.io/repositories/13606" target="_blank"><img src="https://trendshift.io/api/badge/repositories/13606" alt="MODSetter%2FSurfSense | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</div>

# SurfSense

Connect any LLM to your internal knowledge sources and chat with it in real time.

SurfSense is a highly customizable AI research agent, connected to external sources such as Search Engines (SearxNG, Tavily, LinkUp), Google Drive, Slack, Microsoft Teams, Linear, Jira, ClickUp, Confluence, BookStack, Gmail, Notion, YouTube, GitHub, Discord, Airtable, Google Calendar, Luma, Circleback, Elasticsearch, Obsidian, and more to come.
# Video

https://github.com/user-attachments/assets/cc0c84d3-1f2f-4f7a-b519-2ecce22310b1

## Podcast Sample

https://github.com/user-attachments/assets/a0a16566-6967-4374-ac51-9b3e07fbecd7

## How to Use SurfSense

### Cloud

1. Go to [surfsense.com](https://www.surfsense.com) and log in.

<p align="center"><img src="https://github.com/user-attachments/assets/b4df25fe-db5a-43c2-9462-b75cf7f1b707" alt="Login" /></p>

2. Connect your connectors and sync. Enable periodic syncing to keep connectors up to date.

<p align="center"><img src="https://github.com/user-attachments/assets/59da61d7-da05-4576-b7c0-dbc09f5985e8" alt="Connectors" /></p>

3. While connector data is being indexed, upload documents.

<p align="center"><img src="https://github.com/user-attachments/assets/d1e8b2e2-9eac-41d8-bdc0-f0cdc405d128" alt="Upload Documents" /></p>

4. Once everything is indexed, ask away (use cases):

- Basic search and citation

<p align="center"><img src="https://github.com/user-attachments/assets/81e797a1-e01a-4003-8e60-0a0b3a9789df" alt="Search and Citation" /></p>

- Document Mention QNA

<p align="center"><img src="https://github.com/user-attachments/assets/be958295-0a8c-4707-998c-9fe1f1c007be" alt="Document Mention QNA" /></p>

- Report generation and export (PDF and DOCX for now)

<p align="center"><img src="https://github.com/user-attachments/assets/9836b7d6-57c9-4951-b61c-68202c9b6ace" alt="Report Generation" /></p>

- Podcast generation

<p align="center"><img src="https://github.com/user-attachments/assets/58c9b057-8848-4e81-aaba-d2c617985d8c" alt="Podcast Generation" /></p>

- Image generation

<p align="center"><img src="https://github.com/user-attachments/assets/25f94cb3-18f8-4854-afd9-27b7bfd079cb" alt="Image Generation" /></p>
### Self Hosted

Run SurfSense on your own infrastructure for full data control and privacy.

**Quick Start (Docker one-liner):**

```bash
docker run -d -p 3000:3000 -p 8000:8000 -p 5133:5133 \
  -v surfsense-data:/data \
  --name surfsense \
  --restart unless-stopped \
  ghcr.io/modsetter/surfsense:latest
```

After starting, open [http://localhost:3000](http://localhost:3000) in your browser.

For Docker Compose, manual installation, and other deployment options, check the [docs](https://www.surfsense.com/docs/).
## Key Features

### 💡 **Idea**:
- Open-source alternative to NotebookLM, Perplexity, and Glean. Connect any LLM to your internal knowledge sources and collaborate with your team in real time.
### 📁 **Multiple File Format Uploading Support**
- Save content from your own personal files *(documents, images, videos; supports **50+ file extensions**)* to your own personal knowledge base.
### 🔍 **Powerful Search**
- Quickly research or find anything in your saved content.
### 💬 **Chat with your Saved Content**
- Interact in natural language and get cited answers.
### 📄 **Cited Answers**
- Get cited answers just like Perplexity.
### 🧩 **Universal Compatibility**
- Connect virtually any inference provider via the OpenAI spec and LiteLLM.
### 🔔 **Privacy & Local LLM Support**
- Works flawlessly with local LLMs like vLLM and Ollama.
### 🏠 **Self Hostable**
- Open source and easy to deploy locally.
### 👥 **Team Collaboration with RBAC**
- Role-Based Access Control for Search Spaces
- Invite team members with customizable roles (Owner, Admin, Editor, Viewer)
- Granular permissions for documents, chats, connectors, and settings
- Share knowledge bases securely within your organization
- Team chats update in real time, with "Chat about the chat" in comment threads
### 🎙️ Podcasts
- Blazingly fast podcast generation agent. (Creates a 3-minute podcast in under 20 seconds.)
- Convert your chat conversations into engaging audio content
- Support for local TTS providers (Kokoro TTS)
- Support for multiple TTS providers (OpenAI, Azure, Google Vertex AI)

| Feature | Description |
|---------|-------------|
| OSS Alternative | Drop-in replacement for NotebookLM, Perplexity, and Glean with real-time team collaboration |
| 50+ File Formats | Upload documents, images, and videos via LlamaCloud, Unstructured, or Docling (local) |
| Hybrid Search | Semantic + full-text search with hierarchical indices and Reciprocal Rank Fusion |
| Cited Answers | Chat with your knowledge base and get Perplexity-style cited responses |
| Deep Agent Architecture | Powered by [LangChain Deep Agents](https://docs.langchain.com/oss/python/deepagents/overview): planning, subagents, and file system access |
| Universal LLM Support | 100+ LLMs, 6000+ embedding models, all major rerankers via the OpenAI spec & LiteLLM |
| Privacy First | Full local LLM support (vLLM, Ollama); your data stays yours |
| Team Collaboration | RBAC with Owner / Admin / Editor / Viewer roles, real-time chat & comment threads |
| Podcast Generation | 3-minute podcast in under 20 seconds; multiple TTS providers (OpenAI, Azure, Kokoro) |
| Browser Extension | Cross-browser extension to save any webpage, including auth-protected pages |
| 25+ Connectors | Search engines, Google Drive, Slack, Teams, Jira, Notion, GitHub, Discord & [more](#fontes-externas) |
| Self Hostable | Open source; Docker one-liner or full Docker Compose for production |
### 🤖 **Deep Agent Architecture**
- Powered by [LangChain Deep Agents](https://docs.langchain.com/oss/python/deepagents/overview) - agents that can plan, use subagents, and leverage file systems for complex tasks.

<details>
<summary><b>Full list of External Sources</b></summary>
<a id="fontes-externas"></a>
### 📊 **Advanced RAG Techniques**
- Supports 100+ LLMs
- Supports 6000+ embedding models
- Supports all major rerankers (Pinecone, Cohere, Flashrank, etc.)
- Uses hierarchical indices (2-tier RAG setup)
- Uses hybrid search (semantic + full-text search combined with Reciprocal Rank Fusion)
### ℹ️ **External Sources**
- Search Engines (Tavily, LinkUp)
- SearxNG (self-hosted instances)
- Google Drive
- Slack
- Microsoft Teams
- Linear
- Jira
- ClickUp
- Confluence
- BookStack
- Notion
- Gmail
- YouTube Videos
- GitHub
- Discord
- Airtable
- Google Calendar
- Luma
- Circleback
- Elasticsearch
- Obsidian
- and more to come...
## 📄 **Supported File Extensions**

| ETL Service | Formats | Notes |
|-------------|---------|-------|
| **LlamaCloud** | 50+ formats | Documents, presentations, spreadsheets, images |
| **Unstructured** | 34+ formats | Core formats + email support |
| **Docling** | Core formats | Local processing, no API key required |

**Audio/Video** (via STT Service): `.mp3`, `.wav`, `.mp4`, `.webm`, etc.
### 🔖 Cross Browser Extension
- The SurfSense extension can be used to save any webpage you like.
- Its main use case is saving webpages that are protected behind authentication.

</details>
## FEATURE REQUESTS AND FUTURE

Join the [SurfSense Discord](https://discord.gg/ejRNvftDp9) and help shape the future of SurfSense!

## Roadmap

Stay up to date with our development progress and upcoming features!
Check out our public roadmap and contribute your ideas or feedback:

**Roadmap Discussion:** [SurfSense 2026 Roadmap](https://github.com/MODSetter/SurfSense/discussions/565)

**Kanban Board:** [SurfSense Project Board](https://github.com/users/MODSetter/projects/3)

## How to get started?
### Quick Start with Docker 🐳

> [!TIP]
> For production deployments, use the full [Docker Compose setup](https://www.surfsense.com/docs/docker-installation), which offers more control and scalability.

**Linux/macOS:**

```bash
docker run -d -p 3000:3000 -p 8000:8000 -p 5133:5133 \
  -v surfsense-data:/data \
  --name surfsense \
  --restart unless-stopped \
  ghcr.io/modsetter/surfsense:latest
```

**Windows (PowerShell):**

```powershell
docker run -d -p 3000:3000 -p 8000:8000 -p 5133:5133 `
  -v surfsense-data:/data `
  --name surfsense `
  --restart unless-stopped `
  ghcr.io/modsetter/surfsense:latest
```

**With Custom Configuration:**

You can pass any environment variable using `-e` flags:

```bash
docker run -d -p 3000:3000 -p 8000:8000 -p 5133:5133 \
  -v surfsense-data:/data \
  -e EMBEDDING_MODEL=openai://text-embedding-ada-002 \
  -e OPENAI_API_KEY=your_openai_api_key \
  -e AUTH_TYPE=GOOGLE \
  -e GOOGLE_OAUTH_CLIENT_ID=your_google_client_id \
  -e GOOGLE_OAUTH_CLIENT_SECRET=your_google_client_secret \
  -e ETL_SERVICE=LLAMACLOUD \
  -e LLAMA_CLOUD_API_KEY=your_llama_cloud_key \
  --name surfsense \
  --restart unless-stopped \
  ghcr.io/modsetter/surfsense:latest
```

> [!NOTE]
> If deploying behind a reverse proxy with HTTPS, add `-e BACKEND_URL=https://api.yourdomain.com`

After starting, access SurfSense at:
- **Frontend**: [http://localhost:3000](http://localhost:3000)
- **Backend API**: [http://localhost:8000](http://localhost:8000)
- **API Docs**: [http://localhost:8000/docs](http://localhost:8000/docs)
- **Electric-SQL**: [http://localhost:5133](http://localhost:5133)

**Useful Commands:**

```bash
docker logs -f surfsense   # View logs
docker stop surfsense      # Stop
docker start surfsense     # Start
docker rm surfsense        # Remove (data preserved in volume)
```
### Installation Options

SurfSense provides multiple options to get started:

1. **[SurfSense Cloud](https://www.surfsense.com/login)** - The easiest way to try SurfSense without any setup.
   - No installation required
   - Instant access to all features
   - Perfect for getting started quickly

2. **Quick Start Docker (above)** - A single command to get SurfSense running locally.
   - All-in-one image with PostgreSQL, Redis, and all services bundled
   - Perfect for evaluation, development, and small deployments
   - Data persisted via a Docker volume

3. **[Docker Compose (Production)](https://www.surfsense.com/docs/docker-installation)** - Full-stack deployment with separate services.
   - Includes pgAdmin for database management through a web UI
   - Supports environment variable customization via a `.env` file
   - Flexible deployment options (full stack or core services only)
   - Better for production, with independent scaling of services

4. **[Manual Installation](https://www.surfsense.com/docs/manual-installation)** - For users who prefer more control over their setup or need to customize their deployment.

The Docker and manual installation guides include detailed OS-specific instructions for Windows, macOS, and Linux.

Before self-hosting, make sure to complete the [prerequisite setup steps](https://www.surfsense.com/docs/), including:
- Auth setup (optional - defaults to LOCAL auth)
- **File Processing ETL Service** (optional - defaults to Docling):
  - Docling (default, local processing, no API key required; supports PDF, Office docs, images, HTML, CSV)
  - Unstructured.io API key (supports 34+ formats)
  - LlamaIndex API key (enhanced parsing, supports 50+ formats)
- Other API keys as needed for your use case
## Contribute

Contributions are very welcome! A contribution can be as small as a star or even finding and creating issues.
Fine-tuning the backend is always desired.

For detailed contribution guidelines, please see our [CONTRIBUTING.md](CONTRIBUTING.md) file.

## Contributors

<a href="https://github.com/MODSetter/SurfSense/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=MODSetter/SurfSense" />
</a>

## Star History

<a href="https://www.star-history.com/#MODSetter/SurfSense&Date">
|||
269 README.zh-CN.md

@@ -15,6 +15,9 @@

[English](README.md) | [Español](README.es.md) | [Português](README.pt-BR.md) | [हिन्दी](README.hi.md) | [简体中文](README.zh-CN.md)

</div>
<div align="center">
  <a href="https://trendshift.io/repositories/13606" target="_blank"><img src="https://trendshift.io/api/badge/repositories/13606" alt="MODSetter%2FSurfSense | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</div>

# SurfSense

@@ -22,101 +25,98 @@
SurfSense 是一个高度可定制的 AI 研究助手,可以连接外部数据源,如搜索引擎(SearxNG、Tavily、LinkUp)、Google Drive、Slack、Microsoft Teams、Linear、Jira、ClickUp、Confluence、BookStack、Gmail、Notion、YouTube、GitHub、Discord、Airtable、Google Calendar、Luma、Circleback、Elasticsearch、Obsidian 等,未来还会支持更多。

<div align="center">
  <a href="https://trendshift.io/repositories/13606" target="_blank"><img src="https://trendshift.io/api/badge/repositories/13606" alt="MODSetter%2FSurfSense | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</div>

# 视频演示
# 视频

https://github.com/user-attachments/assets/cc0c84d3-1f2f-4f7a-b519-2ecce22310b1

## 播客示例

https://github.com/user-attachments/assets/a0a16566-6967-4374-ac51-9b3e07fbecd7

## 如何使用 SurfSense

### Cloud

1. 访问 [surfsense.com](https://www.surfsense.com) 并登录。

<p align="center"><img src="https://github.com/user-attachments/assets/b4df25fe-db5a-43c2-9462-b75cf7f1b707" alt="登录" /></p>

2. 连接您的连接器并同步。启用定期同步以保持连接器数据更新。

<p align="center"><img src="https://github.com/user-attachments/assets/59da61d7-da05-4576-b7c0-dbc09f5985e8" alt="连接器" /></p>

3. 在连接器数据索引期间,上传文档。

<p align="center"><img src="https://github.com/user-attachments/assets/d1e8b2e2-9eac-41d8-bdc0-f0cdc405d128" alt="上传文档" /></p>

4. 一切索引完成后,尽管提问(使用场景):
- 基本搜索和引用

<p align="center"><img src="https://github.com/user-attachments/assets/81e797a1-e01a-4003-8e60-0a0b3a9789df" alt="搜索和引用" /></p>

- 文档提及问答

<p align="center"><img src="https://github.com/user-attachments/assets/be958295-0a8c-4707-998c-9fe1f1c007be" alt="文档提及问答" /></p>

- 报告生成和导出(目前支持 PDF、DOCX)

<p align="center"><img src="https://github.com/user-attachments/assets/9836b7d6-57c9-4951-b61c-68202c9b6ace" alt="报告生成" /></p>

- 播客生成

<p align="center"><img src="https://github.com/user-attachments/assets/58c9b057-8848-4e81-aaba-d2c617985d8c" alt="播客生成" /></p>

- 图像生成

<p align="center"><img src="https://github.com/user-attachments/assets/25f94cb3-18f8-4854-afd9-27b7bfd079cb" alt="图像生成" /></p>

### 自托管

在您自己的基础设施上运行 SurfSense,实现完全的数据控制和隐私保护。

**快速开始(Docker 一行命令):**

```bash
docker run -d -p 3000:3000 -p 8000:8000 -p 5133:5133 \
  -v surfsense-data:/data \
  --name surfsense \
  --restart unless-stopped \
  ghcr.io/modsetter/surfsense:latest
```

启动后,在浏览器中打开 [http://localhost:3000](http://localhost:3000)。

如需 Docker Compose、手动安装及其他部署方式,请查看[文档](https://www.surfsense.com/docs/)。
## 核心功能

### 💡 **理念**:
- NotebookLM、Perplexity 和 Glean 的开源替代方案。将任何 LLM 连接到您的内部知识源,并与团队实时协作。
### 📁 **支持多种文件格式上传**
- 将您个人文件中的内容(文档、图像、视频,支持 **50+ 种文件扩展名**)保存到您自己的个人知识库。
### 🔍 **强大的搜索功能**
- 快速研究或查找已保存内容中的任何信息。
### 💬 **与已保存内容对话**
- 使用自然语言交互并获得引用答案。
### 📄 **引用答案**
- 像 Perplexity 一样获得带引用的答案。
### 🧩 **通用兼容性**
- 通过 OpenAI 规范和 LiteLLM 连接几乎任何推理提供商。
### 🔔 **隐私保护与本地 LLM 支持**
- 完美支持 vLLM 和 Ollama 等本地大语言模型。
### 🏠 **可自托管**
- 开源且易于本地部署。
### 👥 **团队协作与 RBAC**
- 搜索空间的基于角色的访问控制
- 使用可自定义的角色(所有者、管理员、编辑者、查看者)邀请团队成员
- 对文档、聊天、连接器和设置的细粒度权限控制
- 在组织内安全共享知识库
- 团队聊天实时更新,支持评论线程中的"关于聊天的讨论"
### 🎙️ 播客功能
- 超快速播客生成代理(在 20 秒内创建 3 分钟播客)
- 将聊天对话转换为引人入胜的音频内容
- 支持本地 TTS 提供商(Kokoro TTS)
- 支持多个 TTS 提供商(OpenAI、Azure、Google Vertex AI)

| 功能 | 描述 |
|------|------|
| 开源替代方案 | 支持实时团队协作的 NotebookLM、Perplexity 和 Glean 替代品 |
| 50+ 文件格式 | 通过 LlamaCloud、Unstructured 或 Docling(本地)上传文档、图像、视频 |
| 混合搜索 | 语义搜索 + 全文搜索,结合层次化索引和倒数排名融合 |
| 引用回答 | 与知识库对话,获得 Perplexity 风格的引用回答 |
| 深度代理架构 | 基于 [LangChain Deep Agents](https://docs.langchain.com/oss/python/deepagents/overview) 构建,支持规划、子代理和文件系统访问 |
| 通用 LLM 支持 | 100+ LLM、6000+ 嵌入模型、所有主流重排序器,通过 OpenAI spec 和 LiteLLM |
| 隐私优先 | 完整本地 LLM 支持(vLLM、Ollama),您的数据由您掌控 |
| 团队协作 | RBAC 角色控制(所有者/管理员/编辑者/查看者),实时聊天和评论线程 |
| 播客生成 | 20 秒内生成 3 分钟播客;多种 TTS 提供商(OpenAI、Azure、Kokoro) |
| 浏览器扩展 | 跨浏览器扩展,保存任何网页,包括需要身份验证的页面 |
| 25+ 连接器 | 搜索引擎、Google Drive、Slack、Teams、Jira、Notion、GitHub、Discord 等[更多](#外部数据源) |
| 可自托管 | 开源,Docker 一行命令或完整 Docker Compose 用于生产环境 |

### 🤖 **深度代理架构**
- 基于 [LangChain Deep Agents](https://docs.langchain.com/oss/python/deepagents/overview) 构建 - 支持规划、子代理和文件系统的复杂任务处理代理。

<details>
<summary><b>外部数据源完整列表</b></summary>
<a id="外部数据源"></a>
### 📊 **先进的 RAG 技术**
- 支持 100+ 种大语言模型
- 支持 6000+ 种嵌入模型
- 支持所有主流重排序器(Pinecone、Cohere、Flashrank 等)
- 使用层次化索引(2 层 RAG 设置)
- 利用混合搜索(语义搜索 + 全文搜索,结合倒数排名融合)
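
上面提到的"倒数排名融合"(Reciprocal Rank Fusion,RRF)可以用几行代码示意:每个文档的得分是它在各路排名中 `1 / (k + rank)` 的和。以下只是示意实现(函数名为假设,并非 SurfSense 实际代码):

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """融合多路检索结果(例如语义搜索 + 全文搜索)。

    rankings: 每路检索返回的文档 ID 列表,按相关性从高到低排列。
    k: RRF 平滑常数,常用值为 60。
    """
    scores: dict[str, float] = {}
    for ranking in rankings:
        # rank 从 1 开始;排名越靠前,贡献的分数越大
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # 按融合得分从高到低返回文档 ID
    return sorted(scores, key=scores.get, reverse=True)
```

两路检索都排名靠前的文档,会在融合结果中排到最前面。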

### ℹ️ **外部数据源**
- 搜索引擎(Tavily、LinkUp)
- SearxNG(自托管实例)
- Google Drive
- Slack
- Microsoft Teams
- Linear
- Jira
- ClickUp
- Confluence
- BookStack
- Notion
- Gmail
- YouTube 视频
- GitHub
- Discord
- Airtable
- Google Calendar
- Luma
- Circleback
- Elasticsearch
- Obsidian
- 更多即将推出......

## 📄 **支持的文件扩展名**

| ETL 服务 | 格式 | 说明 |
|----------|------|------|
| **LlamaCloud** | 50+ 种格式 | 文档、演示文稿、电子表格、图像 |
| **Unstructured** | 34+ 种格式 | 核心格式 + 电子邮件支持 |
| **Docling** | 核心格式 | 本地处理,无需 API 密钥 |

**音频/视频**(通过 STT 服务):`.mp3`、`.wav`、`.mp4`、`.webm` 等

### 🔖 跨浏览器扩展
- SurfSense 扩展可用于保存您喜欢的任何网页。
- 主要用途是保存需要身份验证的受保护网页。

搜索引擎(Tavily、LinkUp)· SearxNG · Google Drive · Slack · Microsoft Teams · Linear · Jira · ClickUp · Confluence · BookStack · Notion · Gmail · YouTube 视频 · GitHub · Discord · Airtable · Google Calendar · Luma · Circleback · Elasticsearch · Obsidian,更多即将推出。

</details>

## 功能请求与未来规划

@@ -126,120 +126,29 @@ https://github.com/user-attachments/assets/a0a16566-6967-4374-ac51-9b3e07fbecd7
加入 [SurfSense Discord](https://discord.gg/ejRNvftDp9) 一起塑造 SurfSense 的未来!

## 🚀 路线图
## 路线图

随时了解我们的开发进度和即将推出的功能!
查看我们的公开路线图并贡献您的想法或反馈:

**📋 路线图讨论:** [SurfSense 2025-2026 路线图:深度代理、实时协作与 MCP 服务器](https://github.com/MODSetter/SurfSense/discussions/565)
**路线图讨论:** [SurfSense 2026 Roadmap](https://github.com/MODSetter/SurfSense/discussions/565)

**📊 看板:** [SurfSense 项目看板](https://github.com/users/MODSetter/projects/3)
**看板:** [SurfSense Project Board](https://github.com/users/MODSetter/projects/3)

## 如何开始?
## 贡献

### 使用 Docker 快速开始 🐳

> [!TIP]
> 对于生产部署,请使用完整的 [Docker Compose 设置](https://www.surfsense.com/docs/docker-installation),它提供更多控制和可扩展性。

**Linux/macOS:**

```bash
docker run -d -p 3000:3000 -p 8000:8000 -p 5133:5133 \
  -v surfsense-data:/data \
  --name surfsense \
  --restart unless-stopped \
  ghcr.io/modsetter/surfsense:latest
```

**Windows (PowerShell):**

```powershell
docker run -d -p 3000:3000 -p 8000:8000 -p 5133:5133 `
  -v surfsense-data:/data `
  --name surfsense `
  --restart unless-stopped `
  ghcr.io/modsetter/surfsense:latest
```
**使用自定义配置:**

您可以使用 `-e` 标志传递任何环境变量:

```bash
docker run -d -p 3000:3000 -p 8000:8000 -p 5133:5133 \
  -v surfsense-data:/data \
  -e EMBEDDING_MODEL=openai://text-embedding-ada-002 \
  -e OPENAI_API_KEY=your_openai_api_key \
  -e AUTH_TYPE=GOOGLE \
  -e GOOGLE_OAUTH_CLIENT_ID=your_google_client_id \
  -e GOOGLE_OAUTH_CLIENT_SECRET=your_google_client_secret \
  -e ETL_SERVICE=LLAMACLOUD \
  -e LLAMA_CLOUD_API_KEY=your_llama_cloud_key \
  --name surfsense \
  --restart unless-stopped \
  ghcr.io/modsetter/surfsense:latest
```

> [!NOTE]
> - 如果部署在带有 HTTPS 的反向代理后面,请添加 `-e BACKEND_URL=https://api.yourdomain.com`

启动后,访问 SurfSense:
- **前端**: [http://localhost:3000](http://localhost:3000)
- **后端 API**: [http://localhost:8000](http://localhost:8000)
- **API 文档**: [http://localhost:8000/docs](http://localhost:8000/docs)
- **Electric-SQL**: [http://localhost:5133](http://localhost:5133)

**常用命令:**

```bash
docker logs -f surfsense   # 查看日志
docker stop surfsense      # 停止
docker start surfsense     # 启动
docker rm surfsense        # 删除(数据保留在卷中)
```

### 安装选项

SurfSense 提供多种入门方式:

1. **[SurfSense Cloud](https://www.surfsense.com/login)** - 无需任何设置即可试用 SurfSense 的最简单方法。
   - 无需安装
   - 即时访问所有功能
   - 非常适合快速上手

2. **快速启动 Docker(上述方法)** - 一条命令即可在本地运行 SurfSense。
   - 一体化镜像,捆绑 PostgreSQL、Redis 和所有服务
   - 非常适合评估、开发和小型部署
   - 数据通过 Docker 卷持久化

3. **[Docker Compose(生产环境)](https://www.surfsense.com/docs/docker-installation)** - 使用独立服务进行完整堆栈部署。
   - 包含 pgAdmin,通过 Web UI 进行数据库管理
   - 支持通过 `.env` 文件自定义环境变量
   - 灵活的部署选项(完整堆栈或仅核心服务)
   - 更适合生产环境,支持独立扩展服务

4. **[手动安装](https://www.surfsense.com/docs/manual-installation)** - 适合希望对设置有更多控制或需要自定义部署的用户。

Docker 和手动安装指南都包含适用于 Windows、macOS 和 Linux 的详细操作系统特定说明。

在自托管安装之前,请确保完成[先决条件设置步骤](https://www.surfsense.com/docs/),包括:
- 身份验证设置(可选 - 默认为 LOCAL 身份验证)
- **文件处理 ETL 服务**(可选 - 默认为 Docling):
  - Docling(默认,本地处理,无需 API 密钥,支持 PDF、Office 文档、图像、HTML、CSV)
  - Unstructured.io API 密钥(支持 34+ 种格式)
  - LlamaIndex API 密钥(增强解析,支持 50+ 种格式)
- 其他根据用例需要的 API 密钥

## 贡献

非常欢迎贡献!贡献可以小到一个 ⭐,甚至是发现和创建问题。
非常欢迎贡献!贡献可以小到一个 Star,甚至是发现和创建问题。
后端的微调总是受欢迎的。

有关详细的贡献指南,请参阅我们的 [CONTRIBUTING.md](CONTRIBUTING.md) 文件。

## 贡献者

<a href="https://github.com/MODSetter/SurfSense/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=MODSetter/SurfSense" />
</a>

## Star 历史

<a href="https://www.star-history.com/#MODSetter/SurfSense&Date">
@@ -57,7 +57,7 @@ environment=PYTHONPATH="/app/backend",UVICORN_LOOP="asyncio",UNSTRUCTURED_HAS_PA

# Celery Worker
[program:celery-worker]
command=celery -A app.celery_app worker --loglevel=info --concurrency=2 --pool=solo
command=celery -A app.celery_app worker --loglevel=info --concurrency=2 --pool=solo --queues=surfsense,surfsense.connectors
directory=/app/backend
autostart=true
autorestart=true
@@ -249,7 +249,11 @@ async def create_surfsense_deep_agent(
        available_connectors is not None and "NOTION_CONNECTOR" in available_connectors
    )
    if not has_notion_connector:
        notion_tools = ["create_notion_page", "update_notion_page", "delete_notion_page"]
        notion_tools = [
            "create_notion_page",
            "update_notion_page",
            "delete_notion_page",
        ]
        modified_disabled_tools.extend(notion_tools)

    # Build tools using the async registry (includes MCP tools)
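
The gating logic in the hunk above can be sketched as a standalone helper (hypothetical name; the real code mutates `modified_disabled_tools` inside `create_surfsense_deep_agent`):

```python
def disabled_tools_for(available_connectors, disabled_tools):
    """Return the disabled-tool list, adding the three Notion page tools
    when no NOTION_CONNECTOR is available, mirroring the diff above."""
    modified = list(disabled_tools)
    has_notion_connector = (
        available_connectors is not None and "NOTION_CONNECTOR" in available_connectors
    )
    if not has_notion_connector:
        modified.extend(
            ["create_notion_page", "update_notion_page", "delete_notion_page"]
        )
    return modified
```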
@@ -55,19 +55,23 @@ def create_create_notion_page_tool(
        - url: URL to the created page (if success)
        - title: Page title (if success)
        - message: Result message

        IMPORTANT: If status is "rejected", the user explicitly declined the action.
        Respond with a brief acknowledgment (e.g., "Understood, I didn't create the page.")
        Respond with a brief acknowledgment (e.g., "Understood, I didn't create the page.")
        and move on. Do NOT ask for parent page IDs, troubleshoot, or suggest alternatives.

        Examples:
        - "Create a Notion page titled 'Meeting Notes' with content 'Discussed project timeline'"
        - "Save this to Notion with title 'Research Summary'"
        """
        logger.info(f"create_notion_page called: title='{title}', parent_page_id={parent_page_id}")
        logger.info(
            f"create_notion_page called: title='{title}', parent_page_id={parent_page_id}"
        )

        if db_session is None or search_space_id is None or user_id is None:
            logger.error("Notion tool not properly configured - missing required parameters")
            logger.error(
                "Notion tool not properly configured - missing required parameters"
            )
            return {
                "status": "error",
                "message": "Notion tool not properly configured. Please contact support.",
@@ -75,66 +79,81 @@ def create_create_notion_page_tool(
        try:
            metadata_service = NotionToolMetadataService(db_session)
            context = await metadata_service.get_creation_context(search_space_id, user_id)
            context = await metadata_service.get_creation_context(
                search_space_id, user_id
            )

            if "error" in context:
                logger.error(f"Failed to fetch creation context: {context['error']}")
                return {
                    "status": "error",
                    "message": context["error"],
                }

            logger.info(f"Requesting approval for creating Notion page: '{title}'")
            approval = interrupt({
                "type": "notion_page_creation",
                "action": {
                    "tool": "create_notion_page",
                    "params": {
                        "title": title,
                        "content": content,
                        "parent_page_id": parent_page_id,
                        "connector_id": connector_id,
            approval = interrupt(
                {
                    "type": "notion_page_creation",
                    "action": {
                        "tool": "create_notion_page",
                        "params": {
                            "title": title,
                            "content": content,
                            "parent_page_id": parent_page_id,
                            "connector_id": connector_id,
                        },
                    },
                },
                "context": context,
            })

            decisions = approval.get("decisions", [])
                    "context": context,
                }
            )

            decisions_raw = approval.get("decisions", []) if isinstance(approval, dict) else []
            decisions = decisions_raw if isinstance(decisions_raw, list) else [decisions_raw]
            decisions = [d for d in decisions if isinstance(d, dict)]
            if not decisions:
                logger.warning("No approval decision received")
                return {
                    "status": "error",
                    "message": "No approval decision received",
                }

            decision = decisions[0]
            decision_type = decision.get("type") or decision.get("decision_type")
            logger.info(f"User decision: {decision_type}")

            if decision_type == "reject":
                logger.info("Notion page creation rejected by user")
                return {
                    "status": "rejected",
                    "message": "User declined. The page was not created. Do not ask again or suggest alternatives.",
                }

            edited_action = decision.get("edited_action", {})
            final_params = edited_action.get("args", {}) if edited_action else {}

            edited_action = decision.get("edited_action")
            final_params: dict[str, Any] = {}
            if isinstance(edited_action, dict):
                edited_args = edited_action.get("args")
                if isinstance(edited_args, dict):
                    final_params = edited_args
            elif isinstance(decision.get("args"), dict):
                # Some interrupt payloads place args directly on the decision.
                final_params = decision["args"]

            final_title = final_params.get("title", title)
            final_content = final_params.get("content", content)
            final_parent_page_id = final_params.get("parent_page_id", parent_page_id)
            final_connector_id = final_params.get("connector_id", connector_id)

            if not final_title or not final_title.strip():
                logger.error("Title is empty or contains only whitespace")
                return {
                    "status": "error",
                    "message": "Page title cannot be empty. Please provide a valid title.",
                }

            logger.info(f"Creating Notion page with final params: title='{final_title}'")

            logger.info(
                f"Creating Notion page with final params: title='{final_title}'"
            )

            from sqlalchemy.future import select

            from app.db import SearchSourceConnector, SearchSourceConnectorType

@@ -152,7 +171,9 @@ def create_create_notion_page_tool(
            connector = result.scalars().first()

            if not connector:
                logger.warning(f"No Notion connector found for search_space_id={search_space_id}")
                logger.warning(
                    f"No Notion connector found for search_space_id={search_space_id}"
                )
                return {
                    "status": "error",
                    "message": "No Notion connector found. Please connect Notion in your workspace settings.",

@@ -192,19 +213,23 @@ def create_create_notion_page_tool(
                content=final_content,
                parent_page_id=final_parent_page_id,
            )
            logger.info(f"create_page result: {result.get('status')} - {result.get('message', '')}")
            logger.info(
                f"create_page result: {result.get('status')} - {result.get('message', '')}"
            )
            return result

        except Exception as e:
            from langgraph.errors import GraphInterrupt

            if isinstance(e, GraphInterrupt):
                raise

            logger.error(f"Error creating Notion page: {e}", exc_info=True)
            return {
                "status": "error",
                "message": str(e) if isinstance(e, ValueError) else f"Unexpected error: {e!s}",
                "message": str(e)
                if isinstance(e, ValueError)
                else f"Unexpected error: {e!s}",
            }

    return create_notion_page
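
The same defensive decision parsing recurs in all three Notion tools in this change. A standalone sketch of that normalization (hypothetical helper name, not part of the codebase):

```python
from typing import Any


def extract_final_params(approval: Any) -> dict[str, Any]:
    """Normalize an interrupt approval payload into the edited tool args.

    Mirrors the parsing added in the diff: the payload may not be a dict,
    "decisions" may be a single dict rather than a list, and the edited
    args may live under decision["edited_action"]["args"] or directly
    under decision["args"]. Returns {} when nothing usable is present.
    """
    decisions_raw = approval.get("decisions", []) if isinstance(approval, dict) else []
    decisions = decisions_raw if isinstance(decisions_raw, list) else [decisions_raw]
    decisions = [d for d in decisions if isinstance(d, dict)]
    if not decisions:
        return {}
    decision = decisions[0]
    edited_action = decision.get("edited_action")
    if isinstance(edited_action, dict) and isinstance(edited_action.get("args"), dict):
        return edited_action["args"]
    if isinstance(decision.get("args"), dict):
        return decision["args"]
    return {}
```

Callers then fall back to the original tool arguments for any key the user did not edit, e.g. `final_title = params.get("title", title)`.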

@@ -59,10 +59,14 @@ def create_delete_notion_page_tool(
        - "Remove the 'Old Project Plan' Notion page"
        - "Archive the 'Draft Ideas' Notion page"
        """
        logger.info(f"delete_notion_page called: page_title='{page_title}', delete_from_db={delete_from_db}")
        logger.info(
            f"delete_notion_page called: page_title='{page_title}', delete_from_db={delete_from_db}"
        )

        if db_session is None or search_space_id is None or user_id is None:
            logger.error("Notion tool not properly configured - missing required parameters")
            logger.error(
                "Notion tool not properly configured - missing required parameters"
            )
            return {
                "status": "error",
                "message": "Notion tool not properly configured. Please contact support.",

@@ -95,8 +99,10 @@ def create_delete_notion_page_tool(
            connector_id_from_context = context.get("account", {}).get("id")
            document_id = context.get("document_id")

            logger.info(f"Requesting approval for deleting Notion page: '{page_title}' (page_id={page_id}, delete_from_db={delete_from_db})")
            logger.info(
                f"Requesting approval for deleting Notion page: '{page_title}' (page_id={page_id}, delete_from_db={delete_from_db})"
            )

            # Request approval before deleting
            approval = interrupt(
                {

@@ -113,7 +119,9 @@ def create_delete_notion_page_tool(
                }
            )

            decisions = approval.get("decisions", [])
            decisions_raw = approval.get("decisions", []) if isinstance(approval, dict) else []
            decisions = decisions_raw if isinstance(decisions_raw, list) else [decisions_raw]
            decisions = [d for d in decisions if isinstance(d, dict)]
            if not decisions:
                logger.warning("No approval decision received")
                return {

@@ -133,14 +141,25 @@ def create_delete_notion_page_tool(
                }

            # Extract edited action arguments (if user modified the checkbox)
            edited_action = decision.get("edited_action", {})
            final_params = edited_action.get("args", {}) if edited_action else {}
            edited_action = decision.get("edited_action")
            final_params: dict[str, Any] = {}
            if isinstance(edited_action, dict):
                edited_args = edited_action.get("args")
                if isinstance(edited_args, dict):
                    final_params = edited_args
            elif isinstance(decision.get("args"), dict):
                # Some interrupt payloads place args directly on the decision.
                final_params = decision["args"]

            final_page_id = final_params.get("page_id", page_id)
            final_connector_id = final_params.get("connector_id", connector_id_from_context)
            final_connector_id = final_params.get(
                "connector_id", connector_id_from_context
            )
            final_delete_from_db = final_params.get("delete_from_db", delete_from_db)

            logger.info(f"Deleting Notion page with final params: page_id={final_page_id}, connector_id={final_connector_id}, delete_from_db={final_delete_from_db}")
            logger.info(
                f"Deleting Notion page with final params: page_id={final_page_id}, connector_id={final_connector_id}, delete_from_db={final_delete_from_db}"
            )

            from sqlalchemy.future import select

@@ -184,11 +203,17 @@ def create_delete_notion_page_tool(

            # Delete the page from Notion
            result = await notion_connector.delete_page(page_id=final_page_id)
            logger.info(f"delete_page result: {result.get('status')} - {result.get('message', '')}")
            logger.info(
                f"delete_page result: {result.get('status')} - {result.get('message', '')}"
            )

            # If deletion was successful and user wants to delete from DB
            deleted_from_db = False
            if result.get("status") == "success" and final_delete_from_db and document_id:
            if (
                result.get("status") == "success"
                and final_delete_from_db
                and document_id
            ):
                try:
                    from sqlalchemy.future import select

@@ -204,21 +229,27 @@ def create_delete_notion_page_tool(
                        await db_session.delete(document)
                        await db_session.commit()
                        deleted_from_db = True
                        logger.info(f"Deleted document {document_id} from knowledge base")
                        logger.info(
                            f"Deleted document {document_id} from knowledge base"
                        )
                    else:
                        logger.warning(f"Document {document_id} not found in DB")
                except Exception as e:
                    logger.error(f"Failed to delete document from DB: {e}")
                    # Don't fail the whole operation if DB deletion fails
                    # The page is already deleted from Notion, so inform the user
                    result["warning"] = f"Page deleted from Notion, but failed to remove from knowledge base: {e!s}"
                    result["warning"] = (
                        f"Page deleted from Notion, but failed to remove from knowledge base: {e!s}"
                    )

            # Update result with DB deletion status
            if result.get("status") == "success":
                result["deleted_from_db"] = deleted_from_db
                if deleted_from_db:
                    result["message"] = f"{result.get('message', '')} (also removed from knowledge base)"

                    result["message"] = (
                        f"{result.get('message', '')} (also removed from knowledge base)"
                    )

            return result

        except Exception as e:
@@ -51,24 +51,28 @@ def create_update_notion_page_tool(
        - url: URL to the updated page (if success)
        - title: Current page title (if success)
        - message: Result message

        IMPORTANT:
        IMPORTANT:
        - If status is "rejected", the user explicitly declined the action.
          Respond with a brief acknowledgment (e.g., "Understood, I didn't update the page.")
          Respond with a brief acknowledgment (e.g., "Understood, I didn't update the page.")
          and move on. Do NOT ask for alternatives or troubleshoot.
        - If status is "not_found", inform the user conversationally using the exact message provided.
          Example: "I couldn't find the page '[page_title]' in your indexed Notion pages. [message details]"
          Do NOT treat this as an error. Do NOT invent information. Simply relay the message and
          Do NOT treat this as an error. Do NOT invent information. Simply relay the message and
          ask the user to verify the page title or check if it's been indexed.

        Examples:
        - "Add 'New meeting notes from today' to the 'Meeting Notes' Notion page"
        - "Append the following to the 'Project Plan' Notion page: '# Status Update\n\nCompleted phase 1'"
        """
        logger.info(f"update_notion_page called: page_title='{page_title}', content_length={len(content) if content else 0}")
        logger.info(
            f"update_notion_page called: page_title='{page_title}', content_length={len(content) if content else 0}"
        )

        if db_session is None or search_space_id is None or user_id is None:
            logger.error("Notion tool not properly configured - missing required parameters")
            logger.error(
                "Notion tool not properly configured - missing required parameters"
            )
            return {
                "status": "error",
                "message": "Notion tool not properly configured. Please contact support.",

@@ -106,7 +110,9 @@ def create_update_notion_page_tool(
            page_id = context.get("page_id")
            connector_id_from_context = context.get("account", {}).get("id")

            logger.info(f"Requesting approval for updating Notion page: '{page_title}' (page_id={page_id})")
            logger.info(
                f"Requesting approval for updating Notion page: '{page_title}' (page_id={page_id})"
            )
            approval = interrupt(
                {
                    "type": "notion_page_update",

@@ -122,7 +128,9 @@ def create_update_notion_page_tool(
                }
            )

            decisions = approval.get("decisions", [])
            decisions_raw = approval.get("decisions", []) if isinstance(approval, dict) else []
            decisions = decisions_raw if isinstance(decisions_raw, list) else [decisions_raw]
            decisions = [d for d in decisions if isinstance(d, dict)]
            if not decisions:
                logger.warning("No approval decision received")
                return {

@@ -141,14 +149,25 @@ def create_update_notion_page_tool(
                    "message": "User declined. The page was not updated. Do not ask again or suggest alternatives.",
                }

            edited_action = decision.get("edited_action", {})
            final_params = edited_action.get("args", {}) if edited_action else {}
            edited_action = decision.get("edited_action")
            final_params: dict[str, Any] = {}
            if isinstance(edited_action, dict):
                edited_args = edited_action.get("args")
                if isinstance(edited_args, dict):
                    final_params = edited_args
            elif isinstance(decision.get("args"), dict):
                # Some interrupt payloads place args directly on the decision.
                final_params = decision["args"]

            final_page_id = final_params.get("page_id", page_id)
            final_content = final_params.get("content", content)
            final_connector_id = final_params.get("connector_id", connector_id_from_context)
            final_connector_id = final_params.get(
                "connector_id", connector_id_from_context
            )

            logger.info(f"Updating Notion page with final params: page_id={final_page_id}, has_content={final_content is not None}")
            logger.info(
                f"Updating Notion page with final params: page_id={final_page_id}, has_content={final_content is not None}"
            )

            from sqlalchemy.future import select

@@ -192,7 +211,9 @@ def create_update_notion_page_tool(
                page_id=final_page_id,
                content=final_content,
            )
            logger.info(f"update_page result: {result.get('status')} - {result.get('message', '')}")
            logger.info(
                f"update_page result: {result.get('status')} - {result.get('message', '')}"
            )
            return result

        except Exception as e:
@@ -82,7 +82,6 @@ celery_app = Celery(
        "app.tasks.celery_tasks.blocknote_migration_tasks",
        "app.tasks.celery_tasks.document_reindex_tasks",
        "app.tasks.celery_tasks.stale_notification_cleanup_task",
        "app.tasks.celery_tasks.connector_deletion_task",
    ],
)

@@ -143,7 +142,6 @@ celery_app.conf.update(
        "index_bookstack_pages": {"queue": CONNECTORS_QUEUE},
        "index_obsidian_vault": {"queue": CONNECTORS_QUEUE},
        "index_composio_connector": {"queue": CONNECTORS_QUEUE},
        "delete_connector_with_documents": {"queue": CONNECTORS_QUEUE},
        # Everything else (document processing, podcasts, reindexing,
        # schedule checker, cleanup) stays on the default fast queue.
    },
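
The routing above pins long-running connector tasks to a dedicated queue while everything else falls through to the default fast queue. A minimal sketch of that lookup (queue names assumed from the `--queues=surfsense,surfsense.connectors` worker flags elsewhere in this change):

```python
# Hypothetical constants mirroring the worker's --queues flags.
CONNECTORS_QUEUE = "surfsense.connectors"
DEFAULT_QUEUE = "surfsense"

# Explicit routes, as in the task_routes mapping above.
task_routes = {
    "index_bookstack_pages": {"queue": CONNECTORS_QUEUE},
    "index_obsidian_vault": {"queue": CONNECTORS_QUEUE},
    "index_composio_connector": {"queue": CONNECTORS_QUEUE},
    "delete_connector_with_documents": {"queue": CONNECTORS_QUEUE},
}


def queue_for(task_name: str) -> str:
    # Anything not routed explicitly stays on the default fast queue.
    return task_routes.get(task_name, {}).get("queue", DEFAULT_QUEUE)
```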
@@ -1,4 +1,5 @@
import asyncio
import contextlib
import logging
import re
from collections.abc import Awaitable, Callable

@@ -220,6 +221,7 @@ class NotionHistoryConnector:

            # Refresh token
            from app.routes.notion_add_connector_route import refresh_notion_token

            connector = await refresh_notion_token(self._session, connector)

            # Reload credentials after refresh

@@ -440,6 +442,16 @@ class NotionHistoryConnector:
        if page_title not in self._pages_with_skipped_content:
            self._pages_with_skipped_content.append(page_title)

    @staticmethod
    def _api_error_message(error: APIResponseError) -> str:
        """Extract a stable, human-readable message from Notion API errors."""
        body = getattr(error, "body", None)
        if isinstance(body, dict):
            return str(body.get("message", str(error)))
        if body:
            return str(body)
        return str(error)

    async def __aenter__(self):
        """Async context manager entry."""
        return self
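
The `_api_error_message` precedence added above (dict body message, then raw body, then `str(error)`) can be exercised in isolation with a stand-in error type (the real `APIResponseError` comes from the Notion client; `FakeAPIError` here is an assumption for illustration):

```python
class FakeAPIError(Exception):
    """Stand-in for notion_client.APIResponseError, which exposes a `body`."""

    def __init__(self, message: str, body=None):
        super().__init__(message)
        self.body = body


def api_error_message(error) -> str:
    # Same precedence as _api_error_message in the diff above.
    body = getattr(error, "body", None)
    if isinstance(body, dict):
        return str(body.get("message", str(error)))
    if body:
        return str(body)
    return str(error)
```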

@@ -804,7 +816,7 @@ class NotionHistoryConnector:
            results = response.get("results", [])
            if results:
                return results[0]["id"]

            return None

        except Exception as e:

@@ -835,59 +847,81 @@ class NotionHistoryConnector:

            # Heading 1
            if line.startswith("# "):
                blocks.append({
                    "object": "block",
                    "type": "heading_1",
                    "heading_1": {
                        "rich_text": [{"type": "text", "text": {"content": line[2:]}}]
                    },
                })
                blocks.append(
                    {
                        "object": "block",
                        "type": "heading_1",
                        "heading_1": {
                            "rich_text": [
                                {"type": "text", "text": {"content": line[2:]}}
                            ]
                        },
                    }
                )
            # Heading 2
            elif line.startswith("## "):
                blocks.append({
                    "object": "block",
                    "type": "heading_2",
                    "heading_2": {
                        "rich_text": [{"type": "text", "text": {"content": line[3:]}}]
                    },
                })
                blocks.append(
                    {
                        "object": "block",
                        "type": "heading_2",
                        "heading_2": {
                            "rich_text": [
                                {"type": "text", "text": {"content": line[3:]}}
                            ]
                        },
                    }
                )
            # Heading 3
            elif line.startswith("### "):
                blocks.append({
                    "object": "block",
                    "type": "heading_3",
                    "heading_3": {
                        "rich_text": [{"type": "text", "text": {"content": line[4:]}}]
                    },
                })
                blocks.append(
                    {
                        "object": "block",
                        "type": "heading_3",
                        "heading_3": {
                            "rich_text": [
                                {"type": "text", "text": {"content": line[4:]}}
                            ]
                        },
                    }
                )
            # Bullet list
            elif line.startswith("- ") or line.startswith("* "):
                blocks.append({
                    "object": "block",
                    "type": "bulleted_list_item",
                    "bulleted_list_item": {
                        "rich_text": [{"type": "text", "text": {"content": line[2:]}}]
                    },
                })
                blocks.append(
                    {
                        "object": "block",
                        "type": "bulleted_list_item",
                        "bulleted_list_item": {
                            "rich_text": [
                                {"type": "text", "text": {"content": line[2:]}}
                            ]
                        },
                    }
                )
            # Numbered list
            elif (match := re.match(r'^(\d+)\.\s+(.*)$', line)):
            elif match := re.match(r"^(\d+)\.\s+(.*)$", line):
                content = match.group(2)  # Extract text after "number. "
                blocks.append({
                    "object": "block",
                    "type": "numbered_list_item",
                    "numbered_list_item": {
                        "rich_text": [{"type": "text", "text": {"content": content}}]
                    },
                })
                blocks.append(
                    {
                        "object": "block",
                        "type": "numbered_list_item",
                        "numbered_list_item": {
                            "rich_text": [
                                {"type": "text", "text": {"content": content}}
                            ]
                        },
                    }
                )
            # Regular paragraph
            else:
                blocks.append({
                    "object": "block",
                    "type": "paragraph",
                    "paragraph": {
                        "rich_text": [{"type": "text", "text": {"content": line}}]
                    },
                })
                blocks.append(
                    {
                        "object": "block",
                        "type": "paragraph",
                        "paragraph": {
                            "rich_text": [{"type": "text", "text": {"content": line}}]
                        },
                    }
                )

        return blocks
|
||||
|
||||
|
|
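The prefix rules the `_markdown_to_blocks` hunk dispatches on can be exercised in isolation. This sketch reimplements just the line classification (heading levels, bullets, the walrus-matched numbered items, paragraph fallback), not the full Notion block payloads:

```python
import re


def classify_line(line: str) -> tuple[str, str]:
    """Map one markdown line to (notion_block_type, text_content),
    following the same prefix rules as _markdown_to_blocks."""
    if line.startswith("# "):
        return "heading_1", line[2:]
    if line.startswith("## "):
        return "heading_2", line[3:]
    if line.startswith("### "):
        return "heading_3", line[4:]
    if line.startswith(("- ", "* ")):
        return "bulleted_list_item", line[2:]
    # Walrus operator binds the regex match for the numbered-list case.
    if match := re.match(r"^(\d+)\.\s+(.*)$", line):
        return "numbered_list_item", match.group(2)
    return "paragraph", line


print(classify_line("# Title"))   # ('heading_1', 'Title')
print(classify_line("3. third"))  # ('numbered_list_item', 'third')
```

Branch order matters: `"# "` must be tested before `"## "` would be, which works here only because `"## Sub".startswith("# ")` is false — the two-character prefix includes the trailing space.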
@@ -914,8 +948,10 @@ class NotionHistoryConnector:
             APIResponseError: If Notion API returns an error
         """
         try:
-            logger.info(f"Creating Notion page: title='{title}', parent_page_id={parent_page_id}")
+            logger.info(
+                f"Creating Notion page: title='{title}', parent_page_id={parent_page_id}"
+            )

             # Get Notion client
             notion = await self._get_client()
@@ -924,14 +960,16 @@ class NotionHistoryConnector:
             # Prepare parent - find first available page if not provided
             if not parent_page_id:
-                logger.info("No parent_page_id provided, searching for first accessible page...")
+                logger.info(
+                    "No parent_page_id provided, searching for first accessible page..."
+                )
                 parent_page_id = await self._get_first_accessible_parent()
                 if not parent_page_id:
                     logger.warning("No accessible parent pages found")
                     return {
                         "status": "error",
                         "message": "Could not find any accessible Notion pages to use as parent. "
-                            "Please make sure your Notion integration has access to at least one page.",
+                        "Please make sure your Notion integration has access to at least one page.",
                     }
             logger.info(f"Using parent_page_id: {parent_page_id}")
@@ -939,9 +977,7 @@ class NotionHistoryConnector:
             # Create the page with standard title property
             properties = {
-                "title": {
-                    "title": [{"type": "text", "text": {"content": title}}]
-                }
+                "title": {"title": [{"type": "text", "text": {"content": title}}]}
             }

             response = await self._api_call_with_retry(
@@ -959,9 +995,7 @@ class NotionHistoryConnector:
                 for i in range(100, len(children), 100):
                     batch = children[i : i + 100]
                     await self._api_call_with_retry(
-                        notion.blocks.children.append,
-                        block_id=page_id,
-                        children=batch
+                        notion.blocks.children.append, block_id=page_id, children=batch
                     )

             return {
@@ -974,7 +1008,7 @@ class NotionHistoryConnector:
         except APIResponseError as e:
             logger.error(f"Notion API error creating page: {e}")
-            error_msg = e.body.get("message", str(e)) if hasattr(e, "body") else str(e)
+            error_msg = self._api_error_message(e)
             return {
                 "status": "error",
                 "message": f"Failed to create Notion page: {error_msg}",
@@ -991,7 +1025,7 @@ class NotionHistoryConnector:
     ) -> dict[str, Any]:
         """
         Update an existing Notion page by appending new content.

-
+        Note: Content is appended to the page, not replaced.

         Args:
@@ -1013,7 +1047,9 @@ class NotionHistoryConnector:
         try:
             children = self._markdown_to_blocks(content)
             if not children:
-                logger.warning("No blocks generated from content, skipping append")
+                logger.warning(
+                    "No blocks generated from content, skipping append"
+                )
                 return {
                     "status": "error",
                     "message": "Content conversion failed: no valid blocks generated",
@@ -1032,9 +1068,11 @@ class NotionHistoryConnector:
                 await self._api_call_with_retry(
                     notion.blocks.children.append,
                     block_id=page_id,
-                    children=batch
+                    children=batch,
                 )
-                logger.info(f"Successfully appended {len(children)} new blocks to page {page_id}")
+                logger.info(
+                    f"Successfully appended {len(children)} new blocks to page {page_id}"
+                )
             except Exception as e:
                 logger.error(f"Failed to append content blocks: {e}")
                 return {
@@ -1044,8 +1082,7 @@ class NotionHistoryConnector:
             # Get updated page info
             response = await self._api_call_with_retry(
-                notion.pages.retrieve,
-                page_id=page_id
+                notion.pages.retrieve, page_id=page_id
             )
             page_url = response["url"]
             page_title = response["properties"]["title"]["title"][0]["text"]["content"]
@@ -1060,7 +1097,7 @@ class NotionHistoryConnector:
         except APIResponseError as e:
             logger.error(f"Notion API error updating page: {e}")
-            error_msg = e.body.get("message", str(e)) if hasattr(e, "body") else str(e)
+            error_msg = self._api_error_message(e)
             return {
                 "status": "error",
                 "message": f"Failed to update Notion page: {error_msg}",
@@ -1092,18 +1129,14 @@ class NotionHistoryConnector:
             # Archive the page (Notion's way of "deleting")
             response = await self._api_call_with_retry(
-                notion.pages.update,
-                page_id=page_id,
-                archived=True
+                notion.pages.update, page_id=page_id, archived=True
             )

             page_title = "Unknown"
-            try:
+            with contextlib.suppress(KeyError, IndexError):
                 page_title = response["properties"]["title"]["title"][0]["text"][
                     "content"
                 ]
-            except (KeyError, IndexError):
-                pass

             return {
                 "status": "success",
@@ -1113,14 +1146,7 @@ class NotionHistoryConnector:
         except APIResponseError as e:
             logger.error(f"Notion API error deleting page: {e}")
-            # Handle both dict and string body formats
-            if hasattr(e, "body"):
-                if isinstance(e.body, dict):
-                    error_msg = e.body.get("message", str(e))
-                else:
-                    error_msg = str(e.body) if e.body else str(e)
-            else:
-                error_msg = str(e)
+            error_msg = self._api_error_message(e)
             return {
                 "status": "error",
                 "message": f"Failed to delete Notion page: {error_msg}",

@@ -97,7 +97,10 @@ async def create_documents(
             raise HTTPException(status_code=400, detail="Invalid document type")

         await session.commit()
-        return {"message": "Documents processed successfully"}
+        return {
+            "message": "Documents queued for background processing",
+            "status": "queued",
+        }
     except HTTPException:
         raise
     except Exception as e:

@@ -635,8 +635,16 @@ async def delete_thread(
     # For PRIVATE threads, only the creator can delete
     # For SEARCH_SPACE threads, any member with permission can delete
+    # Legacy threads (created_by_id is NULL) have no recorded creator,
+    # so we skip strict ownership and fall through to legacy handling
+    # which allows the search space owner to delete them
    if db_thread.visibility == ChatVisibility.PRIVATE:
-        await check_thread_access(session, db_thread, user, require_ownership=True)
+        await check_thread_access(
+            session,
+            db_thread,
+            user,
+            require_ownership=(db_thread.created_by_id is not None),
+        )

     await session.delete(db_thread)
     await session.commit()

@@ -532,14 +532,16 @@ async def delete_search_source_connector(
     """
     Delete a search source connector and all its associated documents.

-    The deletion runs in background via Celery task. User is notified
-    via the notification system when complete (no polling required).
+    The deletion happens inline (documents are deleted in batches,
+    then the connector record is removed).

     Requires CONNECTORS_DELETE permission.
     """
-    from app.tasks.celery_tasks.connector_deletion_task import (
-        delete_connector_with_documents_task,
-    )
+    from sqlalchemy import delete as sa_delete, func
+
+    from app.db import Document
+
+    deletion_batch_size = 500

     try:
         # Get the connector first
@@ -562,12 +564,10 @@ async def delete_search_source_connector(
                 "You don't have permission to delete this connector",
             )

-        # Store connector info before we queue the deletion task
+        # Store connector info before deletion
         connector_name = db_connector.name
         connector_type = db_connector.connector_type.value
         search_space_id = db_connector.search_space_id

-        # Delete any periodic schedule associated with this connector (lightweight, sync)
+        # Delete any periodic schedule associated with this connector
         if db_connector.periodic_indexing_enabled:
             success = delete_periodic_schedule(connector_id)
             if not success:
@@ -575,7 +575,7 @@ async def delete_search_source_connector(
                     f"Failed to delete periodic schedule for connector {connector_id}"
                 )

-        # For Composio connectors, delete the connected account in Composio (lightweight API call, sync)
+        # For Composio connectors, delete the connected account in Composio
         composio_connector_types = [
             SearchSourceConnectorType.COMPOSIO_GOOGLE_DRIVE_CONNECTOR,
             SearchSourceConnectorType.COMPOSIO_GMAIL_CONNECTOR,
@@ -602,30 +602,58 @@ async def delete_search_source_connector(
                         f"for connector {connector_id}"
                     )
             except Exception as composio_error:
                 # Log but don't fail the deletion - Composio account may already be deleted
                 logger.warning(
                     f"Error deleting Composio connected account {composio_connected_account_id}: {composio_error!s}"
                 )

-        # Queue background task to delete documents and connector
-        # This handles potentially large document counts without blocking the API
-        delete_connector_with_documents_task.delay(
-            connector_id=connector_id,
-            user_id=str(user.id),
-            search_space_id=search_space_id,
-            connector_name=connector_name,
-            connector_type=connector_type,
+        # Delete documents in batches (chunks are deleted via CASCADE)
+        total_deleted = 0
+        count_result = await session.execute(
+            select(func.count(Document.id)).where(Document.connector_id == connector_id)
         )
+        total_docs = count_result.scalar() or 0

         logger.info(
-            f"Queued deletion task for connector {connector_id} ({connector_name})"
+            f"Starting deletion of connector {connector_id} ({connector_name}). "
+            f"Documents to delete: {total_docs}"
         )

+        while True:
+            result = await session.execute(
+                select(Document.id)
+                .where(Document.connector_id == connector_id)
+                .limit(deletion_batch_size)
+            )
+            doc_ids = [row[0] for row in result.fetchall()]
+
+            if not doc_ids:
+                break
+
+            await session.execute(sa_delete(Document).where(Document.id.in_(doc_ids)))
+            await session.commit()
+
+            total_deleted += len(doc_ids)
+            logger.info(
+                f"Deleted batch of {len(doc_ids)} documents. "
+                f"Progress: {total_deleted}/{total_docs}"
+            )
+
+        # Delete the connector record
+        await session.delete(db_connector)
+        await session.commit()
+
+        logger.info(
+            f"Connector {connector_id} ({connector_name}) deleted successfully. "
+            f"Total documents deleted: {total_deleted}"
+        )
+
+        doc_text = "document" if total_deleted == 1 else "documents"
         return {
-            "message": "Connector deletion started. You will be notified when complete.",
-            "status": "queued",
+            "message": f"Connector '{connector_name}' deleted. {total_deleted} {doc_text} removed.",
+            "status": "completed",
             "connector_id": connector_id,
             "connector_name": connector_name,
+            "documents_deleted": total_deleted,
         }
     except HTTPException:
         raise
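The inline route above repeatedly selects up to `deletion_batch_size` document IDs and deletes them until none remain. The control flow can be sketched without a database; `fetch_batch` and `delete_ids` are hypothetical stand-ins for the SQLAlchemy `select(...).limit(...)` and `delete(...).where(...)` calls:

```python
def delete_in_batches(fetch_batch, delete_ids, batch_size=500) -> int:
    """Delete IDs batch by batch until fetch_batch returns nothing."""
    total_deleted = 0
    while True:
        doc_ids = fetch_batch(batch_size)
        if not doc_ids:
            break
        delete_ids(doc_ids)
        total_deleted += len(doc_ids)
    return total_deleted


# In-memory demo standing in for the Document table.
remaining = list(range(1, 1201))
deleted = []


def fetch(n):
    return remaining[:n]


def delete(ids):
    deleted.extend(ids)
    del remaining[: len(ids)]


total = delete_in_batches(fetch, delete, batch_size=500)
print(total)  # 1200 (three batches: 500 + 500 + 200)
```

Batching keeps each transaction small and lets CASCADE handle the chunks, at the cost of one extra round trip per batch; the loop terminates because each iteration strictly shrinks the remaining set.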
@@ -518,7 +518,9 @@ class VercelStreamingService:
         normalized_payload = self._normalize_interrupt_payload(interrupt_value)
         return self.format_data("interrupt-request", normalized_payload)

-    def _normalize_interrupt_payload(self, interrupt_value: dict[str, Any]) -> dict[str, Any]:
+    def _normalize_interrupt_payload(
+        self, interrupt_value: dict[str, Any]
+    ) -> dict[str, Any]:
         """Normalize interrupt payloads from different sources into a consistent format.

         Handles two interrupt sources:

@@ -1,269 +0,0 @@
-"""Celery task for background connector deletion.
-
-This task handles the deletion of all documents associated with a connector
-in the background, then deletes the connector itself. User is notified via
-the notification system when complete (no polling required).
-
-Features:
-- Batch deletion to handle large document counts
-- Automatic retry on failure
-- Progress tracking via notifications
-- Handles both success and failure notifications
-"""
-
-import asyncio
-import logging
-from uuid import UUID
-
-from sqlalchemy import delete, func, select
-from sqlalchemy.ext.asyncio import async_sessionmaker, create_async_engine
-from sqlalchemy.pool import NullPool
-
-from app.celery_app import celery_app
-from app.config import config
-from app.db import Document, Notification, SearchSourceConnector
-
-logger = logging.getLogger(__name__)
-
-# Batch size for document deletion
-DELETION_BATCH_SIZE = 500
-
-
-def _get_celery_session_maker():
-    """Create async session maker for Celery tasks."""
-    engine = create_async_engine(
-        config.DATABASE_URL,
-        poolclass=NullPool,
-        echo=False,
-    )
-    return async_sessionmaker(engine, expire_on_commit=False), engine
-
-
-@celery_app.task(
-    bind=True,
-    name="delete_connector_with_documents",
-    max_retries=3,
-    default_retry_delay=60,
-    autoretry_for=(Exception,),
-    retry_backoff=True,
-)
-def delete_connector_with_documents_task(
-    self,
-    connector_id: int,
-    user_id: str,
-    search_space_id: int,
-    connector_name: str,
-    connector_type: str,
-):
-    """
-    Background task to delete a connector and all its associated documents.
-
-    Creates a notification when complete (success or failure).
-    No polling required - user sees notification in UI.
-
-    Args:
-        connector_id: ID of the connector to delete
-        user_id: ID of the user who initiated the deletion
-        search_space_id: ID of the search space
-        connector_name: Name of the connector (for notification message)
-        connector_type: Type of the connector (for logging)
-    """
-    loop = asyncio.new_event_loop()
-    asyncio.set_event_loop(loop)
-
-    try:
-        return loop.run_until_complete(
-            _delete_connector_async(
-                connector_id=connector_id,
-                user_id=user_id,
-                search_space_id=search_space_id,
-                connector_name=connector_name,
-                connector_type=connector_type,
-            )
-        )
-    finally:
-        loop.close()
-
-
-async def _delete_connector_async(
-    connector_id: int,
-    user_id: str,
-    search_space_id: int,
-    connector_name: str,
-    connector_type: str,
-) -> dict:
-    """
-    Async implementation of connector deletion.
-
-    Steps:
-    1. Count total documents to delete
-    2. Delete documents in batches (chunks cascade automatically)
-    3. Delete the connector record
-    4. Create success notification
-
-    On failure, creates failure notification and re-raises exception.
-    """
-    session_maker, engine = _get_celery_session_maker()
-    total_deleted = 0
-
-    try:
-        async with session_maker() as session:
-            # Step 1: Count total documents for this connector
-            count_result = await session.execute(
-                select(func.count(Document.id)).where(
-                    Document.connector_id == connector_id
-                )
-            )
-            total_docs = count_result.scalar() or 0
-
-            logger.info(
-                f"Starting deletion of connector {connector_id} ({connector_name}). "
-                f"Documents to delete: {total_docs}"
-            )
-
-            # Step 2: Delete documents in batches
-            while True:
-                # Get batch of document IDs
-                result = await session.execute(
-                    select(Document.id)
-                    .where(Document.connector_id == connector_id)
-                    .limit(DELETION_BATCH_SIZE)
-                )
-                doc_ids = [row[0] for row in result.fetchall()]
-
-                if not doc_ids:
-                    break
-
-                # Delete this batch (chunks are deleted via CASCADE)
-                await session.execute(delete(Document).where(Document.id.in_(doc_ids)))
-                await session.commit()
-
-                total_deleted += len(doc_ids)
-                logger.info(
-                    f"Deleted batch of {len(doc_ids)} documents. "
-                    f"Progress: {total_deleted}/{total_docs}"
-                )
-
-            # Step 3: Delete the connector record
-            result = await session.execute(
-                select(SearchSourceConnector).where(
-                    SearchSourceConnector.id == connector_id
-                )
-            )
-            connector = result.scalar_one_or_none()
-
-            if connector:
-                await session.delete(connector)
-                logger.info(f"Deleted connector record: {connector_id}")
-            else:
-                logger.warning(
-                    f"Connector {connector_id} not found - may have been already deleted"
-                )
-
-            # Step 4: Create success notification
-            doc_text = "document" if total_deleted == 1 else "documents"
-            notification = Notification(
-                user_id=UUID(user_id),
-                search_space_id=search_space_id,
-                type="connector_deletion",
-                title=f"{connector_name} removed",
-                message=f"Cleanup complete. {total_deleted} {doc_text} removed.",
-                notification_metadata={
-                    "connector_id": connector_id,
-                    "connector_name": connector_name,
-                    "connector_type": connector_type,
-                    "documents_deleted": total_deleted,
-                    "status": "completed",
-                },
-            )
-            session.add(notification)
-            await session.commit()
-
-            logger.info(
-                f"Connector {connector_id} ({connector_name}) deleted successfully. "
-                f"Total documents deleted: {total_deleted}"
-            )
-
-            return {
-                "status": "success",
-                "connector_id": connector_id,
-                "connector_name": connector_name,
-                "documents_deleted": total_deleted,
-            }
-
-    except Exception as e:
-        logger.error(
-            f"Failed to delete connector {connector_id} ({connector_name}): {e!s}",
-            exc_info=True,
-        )
-
-        # Create failure notification
-        try:
-            async with session_maker() as session:
-                notification = Notification(
-                    user_id=UUID(user_id),
-                    search_space_id=search_space_id,
-                    type="connector_deletion",
-                    title=f"Failed to Remove {connector_name}",
-                    message="Something went wrong while removing this connector. Please try again.",
-                    notification_metadata={
-                        "connector_id": connector_id,
-                        "connector_name": connector_name,
-                        "connector_type": connector_type,
-                        "documents_deleted": total_deleted,
-                        "status": "failed",
-                        "error": str(e),
-                    },
-                )
-                session.add(notification)
-                await session.commit()
-        except Exception as notify_error:
-            logger.error(
-                f"Failed to create failure notification: {notify_error!s}",
-                exc_info=True,
-            )
-
-        # Re-raise to trigger Celery retry
-        raise
-
-    finally:
-        await engine.dispose()
-
-
-async def delete_documents_by_connector_id(
-    session,
-    connector_id: int,
-    batch_size: int = DELETION_BATCH_SIZE,
-) -> int:
-    """
-    Delete all documents associated with a connector in batches.
-
-    This is a utility function that can be used independently of the Celery task
-    for synchronous deletion scenarios (e.g., small document counts).
-
-    Args:
-        session: AsyncSession instance
-        connector_id: ID of the connector
-        batch_size: Number of documents to delete per batch
-
-    Returns:
-        Total number of documents deleted
-    """
-    total_deleted = 0
-
-    while True:
-        result = await session.execute(
-            select(Document.id)
-            .where(Document.connector_id == connector_id)
-            .limit(batch_size)
-        )
-        doc_ids = [row[0] for row in result.fetchall()]
-
-        if not doc_ids:
-            break
-
-        await session.execute(delete(Document).where(Document.id.in_(doc_ids)))
-        await session.commit()
-        total_deleted += len(doc_ids)
-
-    return total_deleted

@@ -377,6 +377,35 @@ async def _stream_agent_events(
                             status="in_progress",
                             items=last_active_step_items,
                         )
+                    elif tool_name == "generate_report":
+                        report_topic = (
+                            tool_input.get("topic", "Report")
+                            if isinstance(tool_input, dict)
+                            else "Report"
+                        )
+                        report_style = (
+                            tool_input.get("report_style", "detailed")
+                            if isinstance(tool_input, dict)
+                            else "detailed"
+                        )
+                        content_len = len(
+                            tool_input.get("source_content", "")
+                            if isinstance(tool_input, dict)
+                            else ""
+                        )
+                        last_active_step_title = "Generating report"
+                        last_active_step_items = [
+                            f"Topic: {report_topic}",
+                            f"Style: {report_style}",
+                            f"Source content: {content_len:,} characters",
+                            "Generating report with LLM...",
+                        ]
+                        yield streaming_service.format_thinking_step(
+                            step_id=tool_step_id,
+                            title="Generating report",
+                            status="in_progress",
+                            items=last_active_step_items,
+                        )
                     else:
                         last_active_step_title = f"Using {tool_name.replace('_', ' ')}"
                         last_active_step_items = []
@@ -544,6 +573,48 @@ async def _stream_agent_events(
                             status="completed",
                             items=completed_items,
                         )
+                    elif tool_name == "generate_report":
+                        report_status = (
+                            tool_output.get("status", "unknown")
+                            if isinstance(tool_output, dict)
+                            else "unknown"
+                        )
+                        report_title = (
+                            tool_output.get("title", "Report")
+                            if isinstance(tool_output, dict)
+                            else "Report"
+                        )
+                        word_count = (
+                            tool_output.get("word_count", 0)
+                            if isinstance(tool_output, dict)
+                            else 0
+                        )
+
+                        if report_status == "ready":
+                            completed_items = [
+                                f"Title: {report_title}",
+                                f"Words: {word_count:,}",
+                                "Report generated successfully",
+                            ]
+                        elif report_status == "failed":
+                            error_msg = (
+                                tool_output.get("error", "Unknown error")
+                                if isinstance(tool_output, dict)
+                                else "Unknown error"
+                            )
+                            completed_items = [
+                                f"Title: {report_title}",
+                                f"Error: {error_msg[:50]}",
+                            ]
+                        else:
+                            completed_items = last_active_step_items
+
+                        yield streaming_service.format_thinking_step(
+                            step_id=original_step_id,
+                            title="Generating report",
+                            status="completed",
+                            items=completed_items,
+                        )
                     elif tool_name == "ls":
                         if isinstance(tool_output, dict):
                             ls_output = tool_output.get("result", "")
@@ -693,10 +764,44 @@ async def _stream_agent_events(
                         yield streaming_service.format_terminal_info(
                             "Knowledge base search completed", "success"
                         )
-                    elif tool_name in ("create_notion_page", "update_notion_page", "delete_notion_page"):
+                    elif tool_name == "generate_report":
+                        # Stream the full report result so frontend can render the ReportCard
                         yield streaming_service.format_tool_output_available(
                             tool_call_id,
-                            tool_output if isinstance(tool_output, dict) else {"result": tool_output},
+                            tool_output
+                            if isinstance(tool_output, dict)
+                            else {"result": tool_output},
                         )
+                        # Send appropriate terminal message based on status
+                        if (
+                            isinstance(tool_output, dict)
+                            and tool_output.get("status") == "ready"
+                        ):
+                            word_count = tool_output.get("word_count", 0)
+                            yield streaming_service.format_terminal_info(
+                                f"Report generated: {tool_output.get('title', 'Report')} ({word_count:,} words)",
+                                "success",
+                            )
+                        else:
+                            error_msg = (
+                                tool_output.get("error", "Unknown error")
+                                if isinstance(tool_output, dict)
+                                else "Unknown error"
+                            )
+                            yield streaming_service.format_terminal_info(
+                                f"Report generation failed: {error_msg}",
+                                "error",
+                            )
+                    elif tool_name in (
+                        "create_notion_page",
+                        "update_notion_page",
+                        "delete_notion_page",
+                    ):
+                        yield streaming_service.format_tool_output_available(
+                            tool_call_id,
+                            tool_output
+                            if isinstance(tool_output, dict)
+                            else {"result": tool_output},
+                        )
                     else:
                         yield streaming_service.format_tool_output_available(

@@ -5,11 +5,13 @@ import { FeaturesBentoGrid } from "@/components/homepage/features-bento-grid";
 import { FeaturesCards } from "@/components/homepage/features-card";
 import { HeroSection } from "@/components/homepage/hero-section";
+import ExternalIntegrations from "@/components/homepage/integrations";
 import { UseCasesGrid } from "@/components/homepage/use-cases-grid";

 export default function HomePage() {
 	return (
 		<main className="min-h-screen bg-gradient-to-b from-gray-50 to-gray-100 text-gray-900 dark:from-black dark:to-gray-900 dark:text-white">
 			<HeroSection />
 			<UseCasesGrid />
 			<FeaturesCards />
 			<FeaturesBentoGrid />
+			<ExternalIntegrations />

@@ -33,8 +33,8 @@ import { membersAtom } from "@/atoms/members/members-query.atoms";
 import { currentUserAtom } from "@/atoms/user/user-query.atoms";
 import { Thread } from "@/components/assistant-ui/thread";
 import { ChatHeader } from "@/components/new-chat/chat-header";
-import { CreateNotionPageToolUI } from "@/components/tool-ui/create-notion-page";
+import { ReportPanel } from "@/components/report-panel/report-panel";
+import { CreateNotionPageToolUI } from "@/components/tool-ui/create-notion-page";
 import type { ThinkingStep } from "@/components/tool-ui/deepagent-thinking";
 import { DeleteNotionPageToolUI } from "@/components/tool-ui/delete-notion-page";
 import { DisplayImageToolUI } from "@/components/tool-ui/display-image";

@@ -969,36 +969,36 @@ export default function NewChatPage() {
 							contentPartsState.currentTextPartIndex = -1;
 						}
 					}
-				}
-			}
-
-			// Merge edited args if present to fix race condition
-			if (decisions.length > 0 && decisions[0].type === "edit" && decisions[0].edited_action) {
-				const editedAction = decisions[0].edited_action;
-				for (const part of contentParts) {
-					if (part.type === "tool-call" && part.toolName === editedAction.name) {
-						part.args = { ...part.args, ...editedAction.args };
-						break;
-					}
-				}
-			}
-
-			const decisionType = decisions[0]?.type as "approve" | "reject" | undefined;
-			if (decisionType) {
-				for (const part of contentParts) {
-					if (
-						part.type === "tool-call" &&
-						typeof part.result === "object" &&
-						part.result !== null &&
-						"__interrupt__" in (part.result as Record<string, unknown>)
-					) {
-						part.result = {
-							...(part.result as Record<string, unknown>),
-							__decided__: decisionType,
-						};
+				// Merge edited args if present to fix race condition
+				if (decisions.length > 0 && decisions[0].type === "edit" && decisions[0].edited_action) {
+					const editedAction = decisions[0].edited_action;
+					for (const part of contentParts) {
+						if (part.type === "tool-call" && part.toolName === editedAction.name) {
+							part.args = { ...part.args, ...editedAction.args };
+							break;
+						}
+					}
+				}
+
+				const decisionType = decisions[0]?.type as "approve" | "reject" | undefined;
+				if (decisionType) {
+					for (const part of contentParts) {
+						if (
+							part.type === "tool-call" &&
+							typeof part.result === "object" &&
+							part.result !== null &&
+							"__interrupt__" in (part.result as Record<string, unknown>)
+						) {
+							part.result = {
+								...(part.result as Record<string, unknown>),
+								__decided__: decisionType,
+							};
+						}
+					}
+				}
+			}
 		}

 		try {
 			const backendUrl = process.env.NEXT_PUBLIC_FASTAPI_BACKEND_URL || "http://localhost:8000";

@@ -126,38 +126,39 @@ export const InlineMentionEditor = forwardRef<InlineMentionEditorRef, InlineMent
 		selection?.addRange(range);
 	}, []);
 
-	// Get plain text content (excluding chips)
+	// Get plain text content with inline mention tokens for chips.
+	// This preserves the original query structure sent to the backend/LLM.
 	const getText = useCallback((): string => {
 		if (!editorRef.current) return "";
 
-		let text = "";
-		const walker = document.createTreeWalker(
-			editorRef.current,
-			NodeFilter.SHOW_TEXT | NodeFilter.SHOW_ELEMENT,
-			{
-				acceptNode: (node) => {
-					// Skip chip elements entirely
-					if (node.nodeType === Node.ELEMENT_NODE) {
-						const el = node as Element;
-						if (el.hasAttribute(CHIP_DATA_ATTR)) {
-							return NodeFilter.FILTER_REJECT; // Skip this subtree
-						}
-						return NodeFilter.FILTER_SKIP; // Continue into children
-					}
-					return NodeFilter.FILTER_ACCEPT;
-				},
-			}
-		);
-
-		let node: Node | null = walker.nextNode();
-		while (node) {
-			if (node.nodeType === Node.TEXT_NODE) {
-				text += node.textContent;
-			}
-			node = walker.nextNode();
-		}
-
-		return text.trim();
+		const extractText = (node: Node): string => {
+			if (node.nodeType === Node.TEXT_NODE) {
+				return node.textContent ?? "";
+			}
+
+			if (node.nodeType === Node.ELEMENT_NODE) {
+				const element = node as Element;
+
+				// Preserve mention chips as inline @title tokens.
+				if (element.hasAttribute(CHIP_DATA_ATTR)) {
+					const title = element.querySelector("[data-mention-title='true']")?.textContent?.trim();
+					if (title) {
+						return `@${title}`;
+					}
+					return "";
+				}
+
+				let result = "";
+				for (const child of Array.from(element.childNodes)) {
+					result += extractText(child);
+				}
+				return result;
+			}
+
+			return "";
+		};
+
+		return extractText(editorRef.current).trim();
 	}, []);
 
 	// Get all mentioned documents
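The rewritten `getText` above recursively walks the editor DOM and collapses mention chips into inline `@title` tokens instead of dropping them. A DOM-free sketch of the same recursion (the node model here is a simplified stand-in, not the real DOM types):

```typescript
// Simplified stand-in for DOM nodes (assumption: the real code walks the DOM).
type MentionNode =
	| { kind: "text"; text: string }
	| { kind: "element"; chipTitle?: string; children: MentionNode[] };

// Mirrors the new recursive getText: text passes through,
// chips collapse to inline "@title" tokens, elements concatenate children.
function extractMentionText(node: MentionNode): string {
	if (node.kind === "text") return node.text;
	if (node.chipTitle) return `@${node.chipTitle}`;
	return node.children.map(extractMentionText).join("");
}
```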
@@ -10,9 +10,9 @@ import {
 } from "@assistant-ui/react-markdown";
 import { CheckIcon, CopyIcon } from "lucide-react";
 import { type FC, memo, type ReactNode, useState } from "react";
+import rehypeKatex from "rehype-katex";
 import remarkGfm from "remark-gfm";
 import remarkMath from "remark-math";
-import rehypeKatex from "rehype-katex";
 import "katex/dist/katex.min.css";
 import { InlineCitation } from "@/components/assistant-ui/inline-citation";
 import { TooltipIconButton } from "@/components/assistant-ui/tooltip-icon-button";
@@ -1,10 +1,10 @@
 "use client";
 import { useFeatureFlagVariantKey } from "@posthog/react";
 import { AnimatePresence, motion } from "motion/react";
-import Image from "next/image";
 import Link from "next/link";
 import React, { useEffect, useRef, useState } from "react";
 import Balancer from "react-wrap-balancer";
+import { WalkthroughScroll } from "@/components/ui/walkthrough-scroll";
 import { AUTH_TYPE, BACKEND_URL } from "@/lib/env-config";
 import { trackLoginAttempt } from "@/lib/posthog/events";
 import { cn } from "@/lib/utils";
@@ -40,41 +40,37 @@ export function HeroSection() {
 	return (
 		<div
 			ref={parentRef}
-			className="relative flex min-h-screen flex-col items-center justify-center overflow-hidden px-4 py-20 md:px-8 md:py-40"
+			className="relative flex min-h-screen flex-col items-center justify-center overflow-hidden px-4 py-12 md:px-8 md:py-24"
 		>
 			<BackgroundGrids />
 			<CollisionMechanism
+				parentRef={parentRef}
 				beamOptions={{
 					initialX: -400,
 					translateX: 600,
 					duration: 7,
 					repeatDelay: 3,
 				}}
-				containerRef={containerRef}
-				parentRef={parentRef}
 			/>
 			<CollisionMechanism
+				parentRef={parentRef}
 				beamOptions={{
 					initialX: -200,
 					translateX: 800,
 					duration: 4,
 					repeatDelay: 3,
 				}}
-				containerRef={containerRef}
-				parentRef={parentRef}
 			/>
 			<CollisionMechanism
+				parentRef={parentRef}
 				beamOptions={{
 					initialX: 200,
 					translateX: 1200,
 					duration: 5,
 					repeatDelay: 3,
 				}}
-				containerRef={containerRef}
-				parentRef={parentRef}
 			/>
 			<CollisionMechanism
-				containerRef={containerRef}
 				parentRef={parentRef}
 				beamOptions={{
 					initialX: 400,
@@ -106,34 +102,12 @@ export function HeroSection() {
 			<p className="relative z-50 mx-auto mt-0 max-w-lg px-4 text-center text-base/6 text-gray-600 dark:text-gray-200">
 				Then chat with it in real-time, even alongside your team.
 			</p>
-			<div className="mb-10 mt-8 flex w-full flex-col items-center justify-center gap-4 px-8 sm:flex-row md:mb-20">
+			<div className="mb-6 mt-6 flex w-full flex-col items-center justify-center gap-4 px-8 sm:flex-row md:mb-10">
 				<GetStartedButton />
 				<ContactSalesButton />
 			</div>
-			<div
-				ref={containerRef}
-				className="relative mx-auto max-w-7xl rounded-[32px] border border-neutral-200/50 bg-neutral-100 p-2 backdrop-blur-lg md:p-4 dark:border-neutral-700 dark:bg-neutral-800/50"
-			>
-				<div className="rounded-[24px] border border-neutral-200 bg-white p-2 dark:border-neutral-700 dark:bg-black">
-					{/* Light mode image */}
-					<Image
-						src="/homepage/main_demo.webp"
-						alt="header"
-						width={1920}
-						height={1080}
-						className="rounded-[20px] block dark:hidden"
-						unoptimized
-					/>
-					{/* Dark mode image */}
-					<Image
-						src="/homepage/main_demo.webp"
-						alt="header"
-						width={1920}
-						height={1080}
-						className="rounded-[20px] hidden dark:block"
-						unoptimized
-					/>
-				</div>
-			</div>
+			<div ref={containerRef} className="relative w-full">
+				<WalkthroughScroll />
+			</div>
 		</div>
 	);
@@ -236,24 +210,23 @@ const BackgroundGrids = () => {
 	);
 };
 
-const CollisionMechanism = React.forwardRef<
-	HTMLDivElement,
-	{
-		containerRef: React.RefObject<HTMLDivElement | null>;
-		parentRef: React.RefObject<HTMLDivElement | null>;
-		beamOptions?: {
-			initialX?: number;
-			translateX?: number;
-			initialY?: number;
-			translateY?: number;
-			rotate?: number;
-			className?: string;
-			duration?: number;
-			delay?: number;
-			repeatDelay?: number;
-		};
-	}
->(({ parentRef, containerRef, beamOptions = {} }, ref) => {
+const CollisionMechanism = ({
+	parentRef,
+	beamOptions = {},
+}: {
+	parentRef: React.RefObject<HTMLDivElement | null>;
+	beamOptions?: {
+		initialX?: number;
+		translateX?: number;
+		initialY?: number;
+		translateY?: number;
+		rotate?: number;
+		className?: string;
+		duration?: number;
+		delay?: number;
+		repeatDelay?: number;
+	};
+}) => {
 	const beamRef = useRef<HTMLDivElement>(null);
 	const [collision, setCollision] = useState<{
 		detected: boolean;
@@ -264,14 +237,14 @@ const CollisionMechanism = React.forwardRef<
 
 	useEffect(() => {
 		const checkCollision = () => {
-			if (beamRef.current && containerRef.current && parentRef.current && !cycleCollisionDetected) {
+			if (beamRef.current && parentRef.current && !cycleCollisionDetected) {
 				const beamRect = beamRef.current.getBoundingClientRect();
-				const containerRect = containerRef.current.getBoundingClientRect();
 				const parentRect = parentRef.current.getBoundingClientRect();
+				const rightEdge = parentRect.right;
 
-				if (beamRect.bottom >= containerRect.top) {
-					const relativeX = beamRect.left - parentRect.left + beamRect.width / 2;
-					const relativeY = beamRect.bottom - parentRect.top;
+				if (beamRect.right >= rightEdge - 20) {
+					const relativeX = parentRect.width - 20;
+					const relativeY = beamRect.top - parentRect.top + beamRect.height / 2;
 
 					setCollision({
 						detected: true,
@@ -288,7 +261,7 @@ const CollisionMechanism = React.forwardRef<
 		const animationInterval = setInterval(checkCollision, 100);
 
 		return () => clearInterval(animationInterval);
-	}, [cycleCollisionDetected, containerRef]);
+	}, [cycleCollisionDetected, parentRef]);
 
 	useEffect(() => {
 		if (collision.detected && collision.coordinates) {

@@ -354,9 +327,7 @@ const CollisionMechanism = React.forwardRef<
 			</AnimatePresence>
 		</>
 	);
-});
-
-CollisionMechanism.displayName = "CollisionMechanism";
+};
 
 const Explosion = ({ ...props }: React.HTMLProps<HTMLDivElement>) => {
 	const spans = Array.from({ length: 20 }, (_, index) => ({
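The hunks above switch the beam collision test from the demo container's top edge to the parent's right edge. The new coordinate math, extracted as a pure function for illustration (the rect shapes are simplified stand-ins for `getBoundingClientRect()` results; the 20px threshold is taken from the diff):

```typescript
type Rect = { top: number; right: number; width: number; height: number };

// Right-edge collision check: the beam "hits" when its right edge crosses
// within 20px of the parent's right edge; coordinates are parent-relative.
function rightEdgeCollision(beamRect: Rect, parentRect: Rect): { x: number; y: number } | null {
	const rightEdge = parentRect.right;
	if (beamRect.right >= rightEdge - 20) {
		return {
			x: parentRect.width - 20,
			y: beamRect.top - parentRect.top + beamRect.height / 2,
		};
	}
	return null;
}
```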
surfsense_web/components/homepage/use-cases-grid.tsx (new file, +107 lines)
@@ -0,0 +1,107 @@
"use client";

import { AnimatePresence, motion } from "motion/react";
import { ExpandedGifOverlay, useExpandedGif } from "@/components/ui/expanded-gif-overlay";

const useCases = [
	{
		title: "Search & Citation",
		description: "Ask questions and get Perplexity-style cited responses from your knowledge base.",
		src: "/homepage/hero_tutorial/BSNCGif.gif",
	},
	{
		title: "Document Mention QNA",
		description: "Mention specific documents in your queries for targeted answers.",
		src: "/homepage/hero_tutorial/BQnaGif_compressed.gif",
	},
	{
		title: "Report Generation",
		description: "Generate and export reports in many formats.",
		src: "/homepage/hero_tutorial/ReportGenGif_compressed.gif",
	},
	{
		title: "Podcast Generation",
		description: "Turn your knowledge into podcasts in under 20 seconds.",
		src: "/homepage/hero_tutorial/PodcastGenGif.gif",
	},
	{
		title: "Image Generation",
		description: "Generate images directly from your conversations.",
		src: "/homepage/hero_tutorial/ImageGenGif.gif",
	},
];

function UseCaseCard({
	title,
	description,
	src,
	className,
}: {
	title: string;
	description: string;
	src: string;
	className?: string;
}) {
	const { expanded, open, close } = useExpandedGif();

	return (
		<>
			<motion.div
				initial={{ opacity: 0, y: 24 }}
				whileInView={{ opacity: 1, y: 0 }}
				viewport={{ once: true, margin: "-60px" }}
				transition={{ duration: 0.5, ease: "easeOut" }}
				className={`group overflow-hidden rounded-2xl border border-neutral-200/60 bg-white shadow-sm transition-shadow duration-300 hover:shadow-xl dark:border-neutral-700/60 dark:bg-neutral-900 ${className ?? ""}`}
			>
				<div
					className="cursor-pointer overflow-hidden bg-neutral-50 p-2 dark:bg-neutral-950"
					onClick={open}
				>
					<img
						src={src}
						alt={title}
						className="w-full rounded-xl object-cover transition-transform duration-500 group-hover:scale-[1.02]"
					/>
				</div>
				<div className="px-5 py-4">
					<h3 className="text-base font-semibold text-neutral-900 dark:text-white">{title}</h3>
					<p className="mt-1 text-sm text-neutral-500 dark:text-neutral-400">{description}</p>
				</div>
			</motion.div>

			<AnimatePresence>
				{expanded && <ExpandedGifOverlay src={src} alt={title} onClose={close} />}
			</AnimatePresence>
		</>
	);
}

export function UseCasesGrid() {
	return (
		<section className="relative mx-auto max-w-7xl px-4 py-4 sm:px-6 sm:py-8 lg:px-8">
			<div className="mb-6 text-center">
				<h2 className="text-3xl font-semibold tracking-tight text-neutral-900 sm:text-4xl dark:text-white">
					What You Can Do
				</h2>
			</div>

			{/* First row: 2 larger cards */}
			<div className="grid grid-cols-1 gap-5 md:grid-cols-2">
				{useCases.slice(0, 2).map((useCase) => (
					<UseCaseCard key={useCase.title} {...useCase} />
				))}
			</div>

			{/* Second row: 3 equal cards */}
			<div className="mt-5 grid grid-cols-1 gap-5 sm:grid-cols-2 lg:grid-cols-3">
				{useCases.slice(2).map((useCase) => (
					<UseCaseCard key={useCase.title} {...useCase} />
				))}
			</div>

			<p className="mt-8 text-center text-sm text-neutral-500 dark:text-neutral-400">
				And more coming soon.
			</p>
		</section>
	);
}
@@ -68,7 +68,12 @@ interface WarningResult {
 	message?: string;
 }
 
-type DeleteNotionPageResult = InterruptResult | SuccessResult | ErrorResult | InfoResult | WarningResult;
+type DeleteNotionPageResult =
+	| InterruptResult
+	| SuccessResult
+	| ErrorResult
+	| InfoResult
+	| WarningResult;
 
 function isInterruptResult(result: unknown): result is InterruptResult {
 	return (

@@ -341,22 +346,23 @@ function SuccessCard({ result }: { result: SuccessResult }) {
 				</p>
 			</div>
 		</div>
-		{(result.deleted_from_db || result.title) && (
-			<div className="space-y-2 px-4 py-3 text-xs">
-				{result.title && (
-					<div>
-						<span className="font-medium text-muted-foreground">Deleted page: </span>
-						<span>{result.title}</span>
-					</div>
-				)}
-				{result.deleted_from_db && (
-					<div className="pt-1">
-						<span className="text-green-600 dark:text-green-500">
-							✓ Also removed from knowledge base
-						</span>
-					</div>
-				)}
-			</div>)}
+		{(result.deleted_from_db || result.title) && (
+			<div className="space-y-2 px-4 py-3 text-xs">
+				{result.title && (
+					<div>
+						<span className="font-medium text-muted-foreground">Deleted page: </span>
+						<span>{result.title}</span>
+					</div>
+				)}
+				{result.deleted_from_db && (
+					<div className="pt-1">
+						<span className="text-green-600 dark:text-green-500">
+							✓ Also removed from knowledge base
+						</span>
+					</div>
+				)}
+			</div>
+		)}
 	</div>
 );
 }
surfsense_web/components/ui/expanded-gif-overlay.tsx (new file, +53 lines)
@@ -0,0 +1,53 @@
"use client";

import { AnimatePresence, motion } from "motion/react";
import { useCallback, useEffect, useState } from "react";

function ExpandedGifOverlay({
	src,
	alt,
	onClose,
}: {
	src: string;
	alt: string;
	onClose: () => void;
}) {
	useEffect(() => {
		const handleKey = (e: KeyboardEvent) => {
			if (e.key === "Escape") onClose();
		};
		document.addEventListener("keydown", handleKey);
		return () => document.removeEventListener("keydown", handleKey);
	}, [onClose]);

	return (
		<motion.div
			initial={{ opacity: 0 }}
			animate={{ opacity: 1 }}
			exit={{ opacity: 0 }}
			transition={{ duration: 0.2 }}
			className="fixed inset-0 z-100 flex items-center justify-center bg-black/70 p-4 backdrop-blur-sm sm:p-8"
			onClick={onClose}
		>
			<motion.img
				initial={{ scale: 0.85, opacity: 0 }}
				animate={{ scale: 1, opacity: 1 }}
				exit={{ scale: 0.85, opacity: 0 }}
				transition={{ duration: 0.25, ease: "easeOut" }}
				src={src}
				alt={alt}
				className="max-h-[90vh] max-w-[90vw] rounded-2xl shadow-2xl"
				onClick={(e) => e.stopPropagation()}
			/>
		</motion.div>
	);
}

function useExpandedGif() {
	const [expanded, setExpanded] = useState(false);
	const open = useCallback(() => setExpanded(true), []);
	const close = useCallback(() => setExpanded(false), []);
	return { expanded, open, close };
}

export { ExpandedGifOverlay, useExpandedGif };
surfsense_web/components/ui/walkthrough-scroll.tsx (new file, +123 lines)
@@ -0,0 +1,123 @@
"use client";

import { AnimatePresence, motion, useScroll, useTransform } from "motion/react";
import { useRef } from "react";
import { ExpandedGifOverlay, useExpandedGif } from "@/components/ui/expanded-gif-overlay";

const walkthroughSteps = [
	{
		step: 1,
		title: "Login",
		description: "Login to get started.",
		src: "/homepage/hero_tutorial/LoginFlowGif.gif",
	},
	{
		step: 2,
		title: "Connect & Sync",
		description: "Connect your connectors and sync. Enable periodic syncing to keep them updated.",
		src: "/homepage/hero_tutorial/ConnectorFlowGif.gif",
	},
	{
		step: 3,
		title: "Upload Documents",
		description: "While connectors index, upload your documents directly.",
		src: "/homepage/hero_tutorial/DocUploadGif.gif",
	},
];

function WalkthroughCard({
	i,
	step,
	title,
	description,
	src,
	progress,
	range,
	targetScale,
}: {
	i: number;
	step: number;
	title: string;
	description: string;
	src: string;
	progress: ReturnType<typeof useScroll>["scrollYProgress"];
	range: [number, number];
	targetScale: number;
}) {
	const container = useRef<HTMLDivElement>(null);
	const scale = useTransform(progress, range, [1, targetScale]);
	const { expanded, open, close } = useExpandedGif();

	return (
		<>
			<div
				ref={container}
				className="sticky top-0 flex items-center justify-center px-4 sm:px-6 lg:px-8"
			>
				<motion.div
					style={{
						scale,
						top: `calc(10vh + ${i * 30}px)`,
					}}
					className="relative flex origin-top flex-col overflow-hidden rounded-2xl border border-neutral-200/60 bg-white shadow-xl sm:rounded-3xl dark:border-neutral-700/60 dark:bg-neutral-900
					w-full max-w-[340px] sm:max-w-[520px] md:max-w-[680px] lg:max-w-[900px]"
				>
					<div className="flex items-center gap-3 border-b border-neutral-200/60 px-4 py-3 sm:px-6 sm:py-4 dark:border-neutral-700/60">
						<span className="flex h-7 w-7 shrink-0 items-center justify-center rounded-full bg-neutral-900 text-xs font-semibold text-white sm:h-8 sm:w-8 sm:text-sm dark:bg-white dark:text-neutral-900">
							{step}
						</span>
						<div className="min-w-0">
							<h3 className="truncate text-sm font-semibold text-neutral-900 sm:text-base dark:text-white">
								{title}
							</h3>
							<p className="hidden text-xs text-neutral-500 sm:block dark:text-neutral-400">
								{description}
							</p>
						</div>
					</div>
					<div
						className="cursor-pointer bg-neutral-50 p-2 sm:p-3 dark:bg-neutral-950"
						onClick={open}
					>
						<img src={src} alt={title} className="w-full rounded-lg object-cover sm:rounded-xl" />
					</div>
				</motion.div>
			</div>

			<AnimatePresence>
				{expanded && <ExpandedGifOverlay src={src} alt={title} onClose={close} />}
			</AnimatePresence>
		</>
	);
}

function WalkthroughScroll() {
	const container = useRef<HTMLDivElement>(null);
	const { scrollYProgress } = useScroll({
		target: container,
		offset: ["start start", "end end"],
	});

	return (
		<div
			ref={container}
			className="relative flex w-full flex-col items-center justify-center pb-[15vh] pt-[1vh] sm:pb-[18vh] sm:pt-[2vh]"
		>
			{walkthroughSteps.map((project, i) => {
				const targetScale = Math.max(0.6, 1 - (walkthroughSteps.length - i - 1) * 0.05);
				return (
					<WalkthroughCard
						key={`walkthrough_${i}`}
						i={i}
						{...project}
						progress={scrollYProgress}
						range={[i * (1 / walkthroughSteps.length), 1]}
						targetScale={targetScale}
					/>
				);
			})}
		</div>
	);
}

export { WalkthroughScroll, WalkthroughCard };
@@ -278,8 +278,9 @@ In a new terminal window, start the Celery worker to handle background tasks:
 # Make sure you're in the surfsense_backend directory
 cd surfsense_backend
 
-# Start Celery worker
-uv run celery -A celery_worker.celery_app worker --loglevel=info --concurrency=1 --pool=solo
+# Start Celery worker (consume both default and connectors queues)
+DEFAULT_Q="${CELERY_TASK_DEFAULT_QUEUE:-surfsense}"
+uv run celery -A celery_worker.celery_app worker --loglevel=info --concurrency=1 --pool=solo --queues="${DEFAULT_Q},${DEFAULT_Q}.connectors"
 ```
 
 **If using pip/venv:**

@@ -293,8 +294,9 @@ source .venv/bin/activate  # Linux/macOS
 # OR
 .venv\Scripts\activate  # Windows
 
-# Start Celery worker
-celery -A celery_worker.celery_app worker --loglevel=info --concurrency=1 --pool=solo
+# Start Celery worker (consume both default and connectors queues)
+DEFAULT_Q="${CELERY_TASK_DEFAULT_QUEUE:-surfsense}"
+celery -A celery_worker.celery_app worker --loglevel=info --concurrency=1 --pool=solo --queues="${DEFAULT_Q},${DEFAULT_Q}.connectors"
 ```
 
 **Optional: Start Flower for monitoring Celery tasks:**
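Both commands rely on shell parameter expansion to derive the queue list: `${CELERY_TASK_DEFAULT_QUEUE:-surfsense}` falls back to `surfsense` when the variable is unset or empty, so the worker consumes both `<queue>` and `<queue>.connectors`. A quick illustration:

```shell
# ${VAR:-default} substitutes the default when VAR is unset or empty.
unset CELERY_TASK_DEFAULT_QUEUE
DEFAULT_Q="${CELERY_TASK_DEFAULT_QUEUE:-surfsense}"
echo "${DEFAULT_Q},${DEFAULT_Q}.connectors"   # surfsense,surfsense.connectors

CELERY_TASK_DEFAULT_QUEUE="surfsense-dev"
DEFAULT_Q="${CELERY_TASK_DEFAULT_QUEUE:-surfsense}"
echo "${DEFAULT_Q},${DEFAULT_Q}.connectors"   # surfsense-dev,surfsense-dev.connectors
```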
@@ -133,7 +133,10 @@ export const updateConnectorResponse = searchSourceConnector;
 export const deleteConnectorRequest = searchSourceConnector.pick({ id: true });
 
 export const deleteConnectorResponse = z.object({
-	message: z.literal("Search source connector deleted successfully"),
+	message: z.string(),
+	status: z.string().optional(),
+	connector_id: z.number().optional(),
+	connector_name: z.string().optional(),
 });
 
 /**
@@ -393,11 +393,19 @@ export function useCommentsElectric(threadId: number | null) {
 			}
 
 			if (syncHandleRef.current) {
-				syncHandleRef.current.unsubscribe();
+				try {
+					syncHandleRef.current.unsubscribe();
+				} catch {
+					// PGlite may already be closed during cleanup
+				}
 				syncHandleRef.current = null;
 			}
 			if (liveQueryRef.current) {
-				liveQueryRef.current.unsubscribe();
+				try {
+					liveQueryRef.current.unsubscribe();
+				} catch {
+					// PGlite may already be closed during cleanup
+				}
 				liveQueryRef.current = null;
 			}
 		};
@@ -180,11 +180,19 @@ export function useConnectorsElectric(searchSpaceId: number | string | null) {
 			syncKeyRef.current = null;
 
 			if (syncHandleRef.current) {
-				syncHandleRef.current.unsubscribe();
+				try {
+					syncHandleRef.current.unsubscribe();
+				} catch {
+					// PGlite may already be closed during cleanup
+				}
 				syncHandleRef.current = null;
 			}
 			if (liveQueryRef.current) {
-				liveQueryRef.current.unsubscribe();
+				try {
+					liveQueryRef.current.unsubscribe();
+				} catch {
+					// PGlite may already be closed during cleanup
+				}
 				liveQueryRef.current = null;
 			}
 		};
@@ -230,11 +230,19 @@ export function useDocuments(
 		async function setupElectricRealtime() {
 			// Cleanup previous subscriptions
 			if (syncHandleRef.current) {
-				syncHandleRef.current.unsubscribe();
+				try {
+					syncHandleRef.current.unsubscribe();
+				} catch {
+					// PGlite may already be closed during cleanup
+				}
 				syncHandleRef.current = null;
 			}
 			if (liveQueryRef.current) {
-				liveQueryRef.current.unsubscribe?.();
+				try {
+					liveQueryRef.current.unsubscribe?.();
+				} catch {
+					// PGlite may already be closed during cleanup
+				}
 				liveQueryRef.current = null;
 			}
 

@@ -420,11 +428,19 @@ export function useDocuments(
 		return () => {
 			mounted = false;
 			if (syncHandleRef.current) {
-				syncHandleRef.current.unsubscribe();
+				try {
+					syncHandleRef.current.unsubscribe();
+				} catch {
+					// PGlite may already be closed during cleanup
+				}
 				syncHandleRef.current = null;
 			}
 			if (liveQueryRef.current) {
-				liveQueryRef.current.unsubscribe?.();
+				try {
+					liveQueryRef.current.unsubscribe?.();
+				} catch {
+					// PGlite may already be closed during cleanup
+				}
 				liveQueryRef.current = null;
 			}
 		};
@@ -131,7 +131,11 @@ export function useInbox(
 
 			// Clean up previous sync
 			if (syncHandleRef.current) {
-				syncHandleRef.current.unsubscribe();
+				try {
+					syncHandleRef.current.unsubscribe();
+				} catch {
+					// PGlite may already be closed during cleanup
+				}
 				syncHandleRef.current = null;
 			}
 

@@ -174,7 +178,11 @@ export function useInbox(
 			mounted = false;
 			userSyncKeyRef.current = null;
 			if (syncHandleRef.current) {
-				syncHandleRef.current.unsubscribe();
+				try {
+					syncHandleRef.current.unsubscribe();
+				} catch {
+					// PGlite may already be closed during cleanup
+				}
 				syncHandleRef.current = null;
 			}
 		};

@@ -199,7 +207,11 @@ export function useInbox(
 		async function setupLiveQuery() {
 			// Clean up previous live query
 			if (liveQueryRef.current) {
-				liveQueryRef.current.unsubscribe();
+				try {
+					liveQueryRef.current.unsubscribe();
+				} catch {
+					// PGlite may already be closed during cleanup
+				}
 				liveQueryRef.current = null;
 			}
 

@@ -297,7 +309,11 @@ export function useInbox(
 		return () => {
 			mounted = false;
 			if (liveQueryRef.current) {
-				liveQueryRef.current.unsubscribe();
+				try {
+					liveQueryRef.current.unsubscribe();
+				} catch {
+					// PGlite may already be closed during cleanup
+				}
 				liveQueryRef.current = null;
 			}
 		};
@@ -142,11 +142,19 @@ export function useMessagesElectric(
 			syncKeyRef.current = null;
 
 			if (syncHandleRef.current) {
-				syncHandleRef.current.unsubscribe();
+				try {
+					syncHandleRef.current.unsubscribe();
+				} catch {
+					// PGlite may already be closed during cleanup
+				}
 				syncHandleRef.current = null;
 			}
 			if (liveQueryRef.current) {
-				liveQueryRef.current.unsubscribe();
+				try {
+					liveQueryRef.current.unsubscribe();
+				} catch {
+					// PGlite may already be closed during cleanup
+				}
 				liveQueryRef.current = null;
 			}
 		};
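The same try/unsubscribe/catch/null sequence now appears in every Electric hook's cleanup path. A hypothetical helper (not part of the diff) that would factor the pattern out:

```typescript
type Unsubscribable = { unsubscribe: () => void };

// Hypothetical helper: swallow the error an already-closed PGlite instance
// may throw from unsubscribe(), then clear the ref so cleanup is idempotent.
function safeUnsubscribe(ref: { current: Unsubscribable | null }): void {
	if (!ref.current) return;
	try {
		ref.current.unsubscribe();
	} catch {
		// PGlite may already be closed during cleanup
	}
	ref.current = null;
}
```

Each `if (syncHandleRef.current) { ... }` block in the hunks above could then collapse to a single `safeUnsubscribe(syncHandleRef)` call.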
@@ -102,6 +102,7 @@
 		"jotai": "^2.15.1",
 		"jotai-tanstack-query": "^0.11.0",
 		"katex": "^0.16.28",
+		"lenis": "^1.3.17",
 		"lowlight": "^3.3.0",
 		"lucide-react": "^0.477.0",
 		"motion": "^12.23.22",
surfsense_web/pnpm-lock.yaml (generated, 21 changed lines)
@@ -251,6 +251,9 @@ importers:
       katex:
         specifier: ^0.16.28
         version: 0.16.28
+      lenis:
+        specifier: ^1.3.17
+        version: 1.3.17(react@19.2.3)
       lowlight:
         specifier: ^3.3.0
         version: 3.3.0

@@ -5873,6 +5876,20 @@ packages:
     resolution: {integrity: sha512-MbjN408fEndfiQXbFQ1vnd+1NoLDsnQW41410oQBXiyXDMYH5z505juWa4KUE1LqxRC7DgOgZDbKLxHIwm27hA==}
     engines: {node: '>=0.10'}
 
+  lenis@1.3.17:
+    resolution: {integrity: sha512-k9T9rgcxne49ggJOvXCraWn5dt7u2mO+BNkhyu6yxuEnm9c092kAW5Bus5SO211zUvx7aCCEtzy9UWr0RB+oJw==}
+    peerDependencies:
+      '@nuxt/kit': '>=3.0.0'
+      react: '>=17.0.0'
+      vue: '>=3.0.0'
+    peerDependenciesMeta:
+      '@nuxt/kit':
+        optional: true
+      react:
+        optional: true
+      vue:
+        optional: true
+
   levn@0.4.1:
     resolution: {integrity: sha512-+bT2uH4E5LGE7h/n3evcS/sQlJXCpIp6ym8OWJ5eV6+67Dsql/LaaT7qJBAt2rzfoa/5QBGBhxDix1dMt2kQKQ==}
     engines: {node: '>= 0.8.0'}

@@ -13666,6 +13683,10 @@ snapshots:
     dependencies:
       language-subtag-registry: 0.3.23
 
+  lenis@1.3.17(react@19.2.3):
+    optionalDependencies:
+      react: 19.2.3
+
   levn@0.4.1:
     dependencies:
       prelude-ls: 1.2.1
After: 4.1 MiB
BIN surfsense_web/public/homepage/hero_tutorial/BSNCGif.gif (new file)
After: 8.1 MiB
BIN surfsense_web/public/homepage/hero_tutorial/ConnectorFlowGif.gif (new file)
After: 5.2 MiB
BIN surfsense_web/public/homepage/hero_tutorial/DocUploadGif.gif (new file)
After: 7.2 MiB
BIN surfsense_web/public/homepage/hero_tutorial/ImageGenGif.gif (new file)
After: 1.9 MiB
BIN surfsense_web/public/homepage/hero_tutorial/LoginFlowGif.gif (new file)
After: 4.1 MiB
BIN surfsense_web/public/homepage/hero_tutorial/PodcastGenGif.gif (new file)
After: 3.1 MiB
After: 6 MiB
Before: 57 KiB
Before: 58 KiB