diff --git a/README.md b/README.md
index d2af00f33..80c2587da 100644
--- a/README.md
+++ b/README.md
@@ -16,6 +16,9 @@ While tools like NotebookLM and Perplexity are impressive and highly effective f

 https://github.com/user-attachments/assets/48142909-6391-4084-b7e8-81da388bb1fc

+# Podcasts
+
+https://github.com/user-attachments/assets/d516982f-de00-4c41-9e4c-632a7d942f41
@@ -36,6 +39,11 @@ Get Cited answers just like Perplexity. Works Flawlessly with Ollama local LLMs.

 #### 🏠 **Self Hostable**
 Open source and easy to deploy locally.

+#### 🎙️ **Podcasts**
+- Blazingly fast podcast generation agent (creates a 3-minute podcast in under 20 seconds)
+- Convert your chat conversations into engaging audio content
+- Support for multiple TTS providers (OpenAI, Azure, Google Vertex AI)
+
 #### 📊 **Advanced RAG Techniques**
 - Supports 150+ LLM's
 - Supports 6000+ Embedding Models.
@@ -58,12 +66,6 @@ Open source and easy to deploy locally.

 - Its main usecase is to save any webpages protected beyond authentication.

-### 2. Temporarily Deprecated
-
-#### Podcasts
-
-- The SurfSense Podcast feature is currently being reworked for better UI and stability. Expect it soon.
-

 ## FEATURE REQUESTS AND FUTURE

@@ -104,6 +106,9 @@ Before installation, make sure to complete the [prerequisite setup steps](https:

 ![researcher](https://github.com/user-attachments/assets/fda3e61f-f936-4b66-b565-d84edde44a67)

+**Podcast Agent**
+![podcasts](https://github.com/user-attachments/assets/6cb82ffd-9e14-4172-bc79-67faf34c4c1c)
+
 **Agent Chat**

@@ -115,6 +120,7 @@ Before installation, make sure to complete the [prerequisite setup steps](https:

 ![ext2](https://github.com/user-attachments/assets/a9b9f1aa-2677-404d-b0a0-c1b2dddf24a7)

+
 ## Tech Stack
diff --git a/surfsense_web/content/docs/docker-installation.mdx b/surfsense_web/content/docs/docker-installation.mdx
index 236366546..47053c915 100644
--- a/surfsense_web/content/docs/docker-installation.mdx
+++ b/surfsense_web/content/docs/docker-installation.mdx
@@ -73,6 +73,7 @@ Before you begin, ensure you have:
   | LONG_CONTEXT_LLM | LiteLLM routed LLM for longer context windows (e.g., `gemini/gemini-2.0-flash`, `ollama/deepseek-r1:8b`) |
   | UNSTRUCTURED_API_KEY | API key for Unstructured.io service for document parsing |
   | FIRECRAWL_API_KEY | API key for Firecrawl service for web crawling |
+  | TTS_SERVICE | Text-to-Speech API provider for Podcasts (e.g., `openai/tts-1`, `azure/neural`, `vertex_ai/`). See [supported providers](https://docs.litellm.ai/docs/text_to_speech#supported-providers) |

 Include API keys for the LLM providers you're using.
 For example:
 - `OPENAI_API_KEY`: If using OpenAI models
diff --git a/surfsense_web/content/docs/manual-installation.mdx b/surfsense_web/content/docs/manual-installation.mdx
index 3813b1b88..b1fed6aa4 100644
--- a/surfsense_web/content/docs/manual-installation.mdx
+++ b/surfsense_web/content/docs/manual-installation.mdx
@@ -61,6 +61,7 @@ Edit the `.env` file and set the following variables:
 | LONG_CONTEXT_LLM | LiteLLM routed long-context LLM (e.g., `gemini/gemini-2.0-flash`, `ollama/deepseek-r1:8b`) |
 | UNSTRUCTURED_API_KEY | API key for Unstructured.io service |
 | FIRECRAWL_API_KEY | API key for Firecrawl service (if using crawler) |
+| TTS_SERVICE | Text-to-Speech API provider for Podcasts (e.g., `openai/tts-1`, `azure/neural`, `vertex_ai/`). See [supported providers](https://docs.litellm.ai/docs/text_to_speech#supported-providers) |

 **Important**: Since LLM calls are routed through LiteLLM, include API keys for the LLM providers you're using:
 - For OpenAI models: `OPENAI_API_KEY`
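As a rough sketch of how the new `TTS_SERVICE` variable might be consumed: LiteLLM routes text-to-speech by `<provider>/<model>` strings, and its `speech()` call with `stream_to_file()` comes from the LiteLLM text-to-speech docs linked above. The helper names, the validation logic, and the `alloy` voice default below are illustrative assumptions, not SurfSense's actual backend code.

```python
import os


def resolve_tts_model(default: str = "openai/tts-1") -> str:
    """Read the LiteLLM TTS model string from the TTS_SERVICE env var."""
    model = os.environ.get("TTS_SERVICE", default)
    # LiteLLM model strings take the form "<provider>/<model>", e.g. "openai/tts-1".
    provider, _, name = model.partition("/")
    if not provider or not name:
        raise ValueError(
            f"TTS_SERVICE should look like '<provider>/<model>', got {model!r}"
        )
    return model


def synthesize_segment(text: str, out_path: str) -> str:
    """Render one podcast segment to an audio file via the configured provider."""
    import litellm  # pip install litellm; the provider's API key must be in the env

    # "alloy" is an illustrative default voice; supported voices vary by provider.
    response = litellm.speech(model=resolve_tts_model(), voice="alloy", input=text)
    response.stream_to_file(out_path)  # write the returned audio stream to disk
    return out_path
```

The same `resolve_tts_model()` check could run at startup so a malformed `TTS_SERVICE` value fails fast rather than at podcast-generation time.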