feat: new docs

DESKTOP-RTLN3BA\$punk 2025-04-24 01:39:56 -07:00
parent 4be3a811e8
commit da2d606a43
6 changed files with 501 additions and 166 deletions


@@ -0,0 +1,158 @@
---
title: Docker Installation
description: Setting up SurfSense using Docker (Recommended)
full: true
---
# Docker Installation (Recommended)
This guide explains how to run SurfSense with Docker Compose, the recommended method for deployment.
## Prerequisites
Before you begin, ensure you have:
- [Docker](https://docs.docker.com/get-docker/) and [Docker Compose](https://docs.docker.com/compose/install/) installed on your machine
- [Git](https://git-scm.com/downloads) (to clone the repository)
- Completed all the [prerequisite setup steps](/docs) including:
- PGVector setup
- Google OAuth configuration
- Unstructured.io API key
- Other required API keys
## Installation Steps
1. **Configure Environment Variables**
Set up the necessary environment variables:
**Linux/macOS:**
```bash
# Copy example environment files
cp surfsense_backend/.env.example surfsense_backend/.env
cp surfsense_web/.env.example surfsense_web/.env
```
**Windows (Command Prompt):**
```cmd
copy surfsense_backend\.env.example surfsense_backend\.env
copy surfsense_web\.env.example surfsense_web\.env
```
**Windows (PowerShell):**
```powershell
Copy-Item -Path surfsense_backend\.env.example -Destination surfsense_backend\.env
Copy-Item -Path surfsense_web\.env.example -Destination surfsense_web\.env
```
Edit both `.env` files and fill in the required values:
**Backend Environment Variables:**
| ENV VARIABLE | DESCRIPTION |
|--------------|-------------|
| DATABASE_URL | PostgreSQL connection string (e.g., `postgresql+asyncpg://postgres:postgres@localhost:5432/surfsense`) |
| SECRET_KEY | JWT Secret key for authentication (should be a secure random string) |
| GOOGLE_OAUTH_CLIENT_ID | Google OAuth client ID obtained from Google Cloud Console |
| GOOGLE_OAUTH_CLIENT_SECRET | Google OAuth client secret obtained from Google Cloud Console |
| NEXT_FRONTEND_URL | URL where your frontend application is hosted (e.g., `http://localhost:3000`) |
| EMBEDDING_MODEL | Name of the embedding model (e.g., `mixedbread-ai/mxbai-embed-large-v1`) |
| RERANKERS_MODEL_NAME | Name of the reranker model (e.g., `ms-marco-MiniLM-L-12-v2`) |
| RERANKERS_MODEL_TYPE | Type of reranker model (e.g., `flashrank`) |
| FAST_LLM | Smaller, faster LLM routed via LiteLLM (e.g., `openai/gpt-4o-mini`, `ollama/deepseek-r1:8b`) |
| STRATEGIC_LLM | Advanced LLM for complex tasks, routed via LiteLLM (e.g., `openai/gpt-4o`, `ollama/gemma3:12b`) |
| LONG_CONTEXT_LLM | LLM with a long context window, routed via LiteLLM (e.g., `gemini/gemini-2.0-flash`, `ollama/deepseek-r1:8b`) |
| UNSTRUCTURED_API_KEY | API key for Unstructured.io service for document parsing |
| FIRECRAWL_API_KEY | API key for Firecrawl service for web crawling |
Include API keys for the LLM providers you're using. For example:
- `OPENAI_API_KEY`: If using OpenAI models
- `GEMINI_API_KEY`: If using Google Gemini models
For other LLM providers, refer to the [LiteLLM documentation](https://docs.litellm.ai/docs/providers).
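`SECRET_KEY` just needs to be a long, unpredictable string. One way to generate one (a sketch; any cryptographically random source works):

```shell
# Generate 32 random bytes as hex for SECRET_KEY (one option among many)
openssl rand -hex 32
```

Paste the resulting 64-character hex string into the `SECRET_KEY` line of `surfsense_backend/.env`.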
**Frontend Environment Variables:**
| ENV VARIABLE | DESCRIPTION |
|--------------|-------------|
| NEXT_PUBLIC_FASTAPI_BACKEND_URL | URL of the backend service (e.g., `http://localhost:8000`) |
2. **Build and Start Containers**
Start the Docker containers:
**Linux/macOS/Windows:**
```bash
docker-compose up --build
```
To run in detached mode (in the background):
**Linux/macOS/Windows:**
```bash
docker-compose up -d
```
**Note for Windows users:** Newer Docker Desktop releases ship Compose V2, which is invoked as `docker compose` (with a space). If `docker-compose` is not found on your system, use `docker compose` instead.
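If you are unsure which Compose entry point your installation provides, a small shell check (a sketch, for Linux/macOS shells) can pick one:

```shell
# Prefer the standalone docker-compose binary; fall back to the Compose V2 plugin
if command -v docker-compose >/dev/null 2>&1; then
  COMPOSE="docker-compose"
else
  COMPOSE="docker compose"
fi
echo "Using: $COMPOSE"
```

You can then run, for example, `$COMPOSE up -d --build` (leave `$COMPOSE` unquoted so the two-word form splits correctly).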
3. **Access the Applications**
Once the containers are running, you can access:
- Frontend: [http://localhost:3000](http://localhost:3000)
- Backend API: [http://localhost:8000](http://localhost:8000)
- API Documentation: [http://localhost:8000/docs](http://localhost:8000/docs)
## Useful Docker Commands
### Container Management
- **Stop containers:**
**Linux/macOS/Windows:**
```bash
docker-compose down
```
- **View logs:**
**Linux/macOS/Windows:**
```bash
# All services
docker-compose logs -f
# Specific service
docker-compose logs -f backend
docker-compose logs -f frontend
docker-compose logs -f db
```
- **Restart a specific service:**
**Linux/macOS/Windows:**
```bash
docker-compose restart backend
```
- **Execute commands in a running container:**
**Linux/macOS/Windows:**
```bash
# Backend
docker-compose exec backend python -m pytest
# Frontend
docker-compose exec frontend pnpm lint
```
## Troubleshooting
- **Linux/macOS:** If you encounter permission errors, you may need to run the docker commands with `sudo`.
- **Windows:** If you see access denied errors, make sure you're running Command Prompt or PowerShell as Administrator.
- If ports are already in use, modify the port mappings in the `docker-compose.yml` file.
- For backend dependency issues, check the `Dockerfile` in the backend directory.
- For frontend dependency issues, check the `Dockerfile` in the frontend directory.
- **Windows-specific:** If you encounter line ending issues (CRLF vs LF), configure Git to handle line endings properly with `git config --global core.autocrlf true` before cloning the repository.
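For example, if host port 3000 is already taken, the frontend mapping in `docker-compose.yml` can be pointed at another host port (the service name below is an assumption; match it to the actual file):

```yaml
services:
  frontend:            # service name is an assumption; check your docker-compose.yml
    ports:
      - "3001:3000"    # host port 3001 now forwards to the container's port 3000
```

If you remap the frontend port, remember to update `NEXT_FRONTEND_URL` in the backend `.env` to match.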
## Next Steps
Once your installation is complete, you can start using SurfSense! Navigate to the frontend URL and log in using your Google account.


@@ -0,0 +1,21 @@
---
title: Installation
description: Current ways to use SurfSense
full: true
---
# Installing SurfSense
There are two ways to install and use SurfSense:
## Docker Installation (Recommended)
The recommended way to install SurfSense is using Docker. This method provides a containerized environment with all dependencies pre-configured.
[Learn more about Docker installation](/docs/docker-installation)
## Manual Installation
For users who prefer more control over the installation process or need to customize their setup, we also provide manual installation instructions.
[Learn more about Manual installation](/docs/manual-installation)


@@ -0,0 +1,258 @@
---
title: Manual Installation
description: Setting up SurfSense manually for customized deployments
full: true
---
# Manual Installation
This guide provides step-by-step instructions for setting up SurfSense without Docker. This approach gives you more control over the installation process and allows for customization of the environment.
## Prerequisites
Before beginning the manual installation, ensure you have completed all the [prerequisite setup steps](/docs), including:
- PGVector installation
- Google OAuth setup
- Unstructured.io API key
- LLM observability (optional)
- Crawler setup (if needed)
## Backend Setup
The backend is the core of SurfSense. Follow these steps to set it up:
### 1. Environment Configuration
First, create and configure your environment variables by copying the example file:
**Linux/macOS:**
```bash
cd surfsense_backend
cp .env.example .env
```
**Windows (Command Prompt):**
```cmd
cd surfsense_backend
copy .env.example .env
```
**Windows (PowerShell):**
```powershell
cd surfsense_backend
Copy-Item -Path .env.example -Destination .env
```
Edit the `.env` file and set the following variables:
| ENV VARIABLE | DESCRIPTION |
|--------------|-------------|
| DATABASE_URL | PostgreSQL connection string (e.g., `postgresql+asyncpg://postgres:postgres@localhost:5432/surfsense`) |
| SECRET_KEY | JWT Secret key for authentication (should be a secure random string) |
| GOOGLE_OAUTH_CLIENT_ID | Google OAuth client ID |
| GOOGLE_OAUTH_CLIENT_SECRET | Google OAuth client secret |
| NEXT_FRONTEND_URL | Frontend application URL (e.g., `http://localhost:3000`) |
| EMBEDDING_MODEL | Name of the embedding model (e.g., `mixedbread-ai/mxbai-embed-large-v1`) |
| RERANKERS_MODEL_NAME | Name of the reranker model (e.g., `ms-marco-MiniLM-L-12-v2`) |
| RERANKERS_MODEL_TYPE | Type of reranker model (e.g., `flashrank`) |
| FAST_LLM | LiteLLM routed faster LLM (e.g., `openai/gpt-4o-mini`, `ollama/deepseek-r1:8b`) |
| STRATEGIC_LLM | LiteLLM routed advanced LLM (e.g., `openai/gpt-4o`, `ollama/gemma3:12b`) |
| LONG_CONTEXT_LLM | LiteLLM routed long-context LLM (e.g., `gemini/gemini-2.0-flash`, `ollama/deepseek-r1:8b`) |
| UNSTRUCTURED_API_KEY | API key for Unstructured.io service |
| FIRECRAWL_API_KEY | API key for Firecrawl service (if using crawler) |
**Important**: Since LLM calls are routed through LiteLLM, include API keys for the LLM providers you're using:
- For OpenAI models: `OPENAI_API_KEY`
- For Google Gemini models: `GEMINI_API_KEY`
- For other providers, refer to the [LiteLLM documentation](https://docs.litellm.ai/docs/providers)
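Putting the table together, a minimal backend `.env` might look like the following (all values are placeholders drawn from the examples above, not working credentials):

```bash
DATABASE_URL=postgresql+asyncpg://postgres:postgres@localhost:5432/surfsense
SECRET_KEY=replace-with-a-long-random-string
GOOGLE_OAUTH_CLIENT_ID=your-client-id.apps.googleusercontent.com
GOOGLE_OAUTH_CLIENT_SECRET=your-client-secret
NEXT_FRONTEND_URL=http://localhost:3000
EMBEDDING_MODEL=mixedbread-ai/mxbai-embed-large-v1
RERANKERS_MODEL_NAME=ms-marco-MiniLM-L-12-v2
RERANKERS_MODEL_TYPE=flashrank
FAST_LLM=openai/gpt-4o-mini
STRATEGIC_LLM=openai/gpt-4o
LONG_CONTEXT_LLM=gemini/gemini-2.0-flash
UNSTRUCTURED_API_KEY=your-unstructured-key
FIRECRAWL_API_KEY=your-firecrawl-key
# Provider key(s) for the LiteLLM-routed models chosen above
OPENAI_API_KEY=your-openai-key
GEMINI_API_KEY=your-gemini-key
```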
### 2. Install Dependencies
Install the backend dependencies using `uv`:
**Linux/macOS:**
```bash
# Install uv if you don't have it
curl -fsSL https://astral.sh/uv/install.sh | bash
# Install dependencies
uv sync
```
**Windows (PowerShell):**
```powershell
# Install uv if you don't have it
irm https://astral.sh/uv/install.ps1 | iex
# Install dependencies
uv sync
```
**Windows (Command Prompt):**
```cmd
# Install dependencies with uv (after installing uv)
uv sync
```
### 3. Run the Backend
Start the backend server:
**Linux/macOS/Windows:**
```bash
# Run without hot reloading
uv run main.py
# Or with hot reloading for development
uv run main.py --reload
```
If everything is set up correctly, you should see output indicating the server is running on `http://localhost:8000`.
## Frontend Setup
### 1. Environment Configuration
Set up the frontend environment:
**Linux/macOS:**
```bash
cd surfsense_web
cp .env.example .env
```
**Windows (Command Prompt):**
```cmd
cd surfsense_web
copy .env.example .env
```
**Windows (PowerShell):**
```powershell
cd surfsense_web
Copy-Item -Path .env.example -Destination .env
```
Edit the `.env` file and set:
| ENV VARIABLE | DESCRIPTION |
|--------------|-------------|
| NEXT_PUBLIC_FASTAPI_BACKEND_URL | Backend URL (e.g., `http://localhost:8000`) |
### 2. Install Dependencies
Install the frontend dependencies:
**Linux/macOS:**
```bash
# Install pnpm if you don't have it
npm install -g pnpm
# Install dependencies
pnpm install
```
**Windows:**
```powershell
# Install pnpm if you don't have it
npm install -g pnpm
# Install dependencies
pnpm install
```
### 3. Run the Frontend
Start the Next.js development server:
**Linux/macOS/Windows:**
```bash
pnpm run dev
```
The frontend should now be running at `http://localhost:3000`.
## Browser Extension Setup (Optional)
The SurfSense browser extension allows you to save any webpage, including those protected behind authentication.
### 1. Environment Configuration
**Linux/macOS:**
```bash
cd surfsense_browser_extension
cp .env.example .env
```
**Windows (Command Prompt):**
```cmd
cd surfsense_browser_extension
copy .env.example .env
```
**Windows (PowerShell):**
```powershell
cd surfsense_browser_extension
Copy-Item -Path .env.example -Destination .env
```
Edit the `.env` file:
| ENV VARIABLE | DESCRIPTION |
|--------------|-------------|
| PLASMO_PUBLIC_BACKEND_URL | SurfSense Backend URL (e.g., `http://127.0.0.1:8000`) |
### 2. Build the Extension
Build the extension for your browser using the [Plasmo framework](https://docs.plasmo.com/framework/workflows/build#with-a-specific-target).
**Linux/macOS/Windows:**
```bash
# Install dependencies
pnpm install
# Build for Chrome (default)
pnpm build
# Or for other browsers
pnpm build --target=firefox
pnpm build --target=edge
```
### 3. Load the Extension
Load the extension in your browser's developer mode and configure it with your SurfSense API key.
## Verification
To verify your installation:
1. Open your browser and navigate to `http://localhost:3000`
2. Sign in with your Google account
3. Create a search space and try uploading a document
4. Test the chat functionality with your uploaded content
## Troubleshooting
- **Database Connection Issues**: Verify your PostgreSQL server is running and pgvector is properly installed
- **Authentication Problems**: Check your Google OAuth configuration and ensure redirect URIs are set correctly
- **LLM Errors**: Confirm your LLM API keys are valid and the selected models are accessible
- **File Upload Failures**: Validate your Unstructured.io API key
- **Windows-specific**: If you encounter path issues, ensure you're using the correct path separator (`\` instead of `/`)
- **macOS-specific**: If you encounter permission issues, you may need to use `sudo` for some installation commands
## Next Steps
Now that you have SurfSense running locally, you can explore its features:
- Create search spaces for organizing your content
- Upload documents or use the browser extension to save webpages
- Ask questions about your saved content
- Explore the advanced RAG capabilities
For production deployments, consider setting up:
- A reverse proxy like Nginx
- SSL certificates for secure connections
- Proper database backups
- User access controls
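As a starting point for the reverse proxy, a minimal Nginx server block might look like the following (the domain and the decision to expose the backend under the same host are assumptions; TLS configuration is omitted):

```nginx
server {
    listen 80;
    server_name surfsense.example.com;  # placeholder domain

    # Frontend (Next.js dev/production server)
    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Backend API (FastAPI)
    location /api/ {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Adjust the `location` routing to however you choose to expose the backend, and update `NEXT_PUBLIC_FASTAPI_BACKEND_URL` to match.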


@@ -4,6 +4,9 @@
   "root": true,
   "pages": [
     "---Setup---",
-    "index"
+    "index",
+    "installation",
+    "docker-installation",
+    "manual-installation"
   ]
 }