Mirror of https://github.com/MODSetter/SurfSense.git, synced 2026-05-09 07:42:39 +02:00
Commit d716347531: 3 changed files with 14 additions and 6 deletions
README.md (18 changes)
@@ -16,6 +16,9 @@ While tools like NotebookLM and Perplexity are impressive and highly effective f
https://github.com/user-attachments/assets/48142909-6391-4084-b7e8-81da388bb1fc
# Podcasts
https://github.com/user-attachments/assets/d516982f-de00-4c41-9e4c-632a7d942f41
@@ -36,6 +39,11 @@ Get Cited answers just like Perplexity.
Works flawlessly with Ollama local LLMs.
#### 🏠 **Self Hostable**
Open source and easy to deploy locally.
#### 🎙️ Podcasts
- Blazingly fast podcast generation agent. (Creates a 3-minute podcast in under 20 seconds.)
- Convert your chat conversations into engaging audio content
- Support for multiple TTS providers (OpenAI, Azure, Google Vertex AI)
#### 📊 **Advanced RAG Techniques**
- Supports 150+ LLMs
- Supports 6000+ embedding models
@@ -58,12 +66,6 @@ Open source and easy to deploy locally.
- Its main use case is to save webpages protected behind authentication.
### 2. Temporarily Deprecated
#### Podcasts
- The SurfSense Podcast feature is currently being reworked for better UI and stability. Expect it soon.
## FEATURE REQUESTS AND FUTURE
@@ -104,6 +106,9 @@ Before installation, make sure to complete the [prerequisite setup steps](https:

**Podcast Agent**

**Agent Chat**
@@ -115,6 +120,7 @@ Before installation, make sure to complete the [prerequisite setup steps](https:

## Tech Stack
@@ -73,6 +73,7 @@ Before you begin, ensure you have:
| LONG_CONTEXT_LLM | LiteLLM routed LLM for longer context windows (e.g., `gemini/gemini-2.0-flash`, `ollama/deepseek-r1:8b`) |
| UNSTRUCTURED_API_KEY | API key for Unstructured.io service for document parsing |
| FIRECRAWL_API_KEY | API key for Firecrawl service for web crawling |
| TTS_SERVICE | Text-to-Speech API provider for Podcasts (e.g., `openai/tts-1`, `azure/neural`, `vertex_ai/`). See [supported providers](https://docs.litellm.ai/docs/text_to_speech#supported-providers) |
Include API keys for the LLM providers you're using. For example:
- `OPENAI_API_KEY`: If using OpenAI models
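Put together, a minimal `.env` sketch for this table might look like the following. The model strings are the examples given above; the key value is a placeholder, not a project default:

```env
# LiteLLM-routed model strings (example values from the table above)
LONG_CONTEXT_LLM=gemini/gemini-2.0-flash
TTS_SERVICE=openai/tts-1

# Provider key for whichever backend your model strings route to
OPENAI_API_KEY=sk-...
```

Only the keys for providers actually referenced by your model strings are needed.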
@@ -61,6 +61,7 @@ Edit the `.env` file and set the following variables:
| LONG_CONTEXT_LLM | LiteLLM routed long-context LLM (e.g., `gemini/gemini-2.0-flash`, `ollama/deepseek-r1:8b`) |
| UNSTRUCTURED_API_KEY | API key for Unstructured.io service |
| FIRECRAWL_API_KEY | API key for Firecrawl service (if using crawler) |
| TTS_SERVICE | Text-to-Speech API provider for Podcasts (e.g., `openai/tts-1`, `azure/neural`, `vertex_ai/`). See [supported providers](https://docs.litellm.ai/docs/text_to_speech#supported-providers) |
**Important**: Since LLM calls are routed through LiteLLM, include API keys for the LLM providers you're using:
- For OpenAI models: `OPENAI_API_KEY`
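As an illustration of the routing, if `LONG_CONTEXT_LLM` points at a Gemini model string, LiteLLM will look for the matching provider key. The lines below are a sketch with placeholder values, not defaults shipped with the project:

```env
# A gemini/ model string makes LiteLLM read GEMINI_API_KEY
LONG_CONTEXT_LLM=gemini/gemini-2.0-flash
GEMINI_API_KEY=your-gemini-key
```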