
Installing SurfSense on Docker

I set up SurfSense on my lab machine so I could run a self-hosted AI research agent without sending data to third parties. This guide walks through the Docker-based installation I used: getting the app running, customising it and fixing the usual snags. It stays hands-on and command-focused, with concrete commands, config pointers and quick checks.

Get Docker running first. Install Docker Engine or Docker Desktop and make sure the daemon is active on your host. I use a named volume so data survives upgrades. For a single-container run, this is the command I use on Linux and macOS; replace the image tag if you prefer a specific release.
bash
docker run -d -p 3000:3000 -p 8000:8000 \
  -v surfsense-data:/data \
  --name surfsense \
  --restart unless-stopped \
  ghcr.io/modsetter/surfsense:latest
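
To confirm the named volume was created, and to see where Docker keeps it on the host, inspect it:
bash
docker volume inspect surfsense-data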

On Windows PowerShell the line continuation character is a backtick rather than a backslash. Use this form:
powershell
docker run -d -p 3000:3000 -p 8000:8000 `
  -v surfsense-data:/data `
  --name surfsense `
  --restart unless-stopped `
  ghcr.io/modsetter/surfsense:latest

If you prefer Docker Compose, define the same ports and a named volume. Make sure ports 3000 and 8000 are free. If they clash, change the host side of the mapping, for example 3001:3000.
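
For reference, here is a minimal docker-compose.yml sketch with the same image, ports and named volume; the service name and layout are my own choices, so adjust to taste.
yaml
services:
  surfsense:
    image: ghcr.io/modsetter/surfsense:latest
    ports:
      - "3000:3000"   # web UI; change the host side, e.g. "3001:3000", if 3000 is taken
      - "8000:8000"   # backend API
    volumes:
      - surfsense-data:/data
    restart: unless-stopped

volumes:
  surfsense-data:

Bring it up with docker compose up -d from the directory containing the file.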

Once the container starts, access the dashboard at http://localhost:3000. The web UI is where you add data sources, change RBAC settings and attach LLM backends. SurfSense connects to many external sources and supports running local LLMs such as Ollama or vLLM, plus a very large catalogue of embedding models. I link my GitHub and Notion accounts for research material, and I point the agent at a local SearxNG instance for private web search. In practice I enable a single local LLM first, confirm queries return, then add remote connectors. Keep the default data volume for now; that preserves indexes and uploads across container recreates.
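
Before wiring up connectors, a quick sanity check that the container is running and the UI is answering:
bash
# container should be listed as Up
docker ps --filter name=surfsense
# dashboard should return an HTTP status line
curl -I http://localhost:3000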

Custom settings live in the dashboard and in environment variables if you run with Compose. Common variables are API keys, allowed origins and paths to local model sockets. If you run local LLMs on the same host, point the SurfSense LLM endpoint to the local socket or HTTP port and set model names in the UI. For embeddings, you can use hosted providers or local models; I recommend testing with a small corpus first so you can tweak embedding and reranker settings without reindexing everything. For ingestion, SurfSense accepts many file types. Drop a couple of PDFs and a directory of markdown files, then watch the ingestion logs to see the pipeline process them.
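
Environment variables can also go straight on the docker run line. The variable names below are placeholders, not confirmed SurfSense settings, so take the real names from the SurfSense documentation or the dashboard; port 11434 is Ollama's default.
bash
# Placeholder variable names; substitute the settings SurfSense actually documents.
# host.docker.internal needs the --add-host mapping on Linux; Docker Desktop provides it natively.
docker run -d -p 3000:3000 -p 8000:8000 \
  -v surfsense-data:/data \
  --add-host host.docker.internal:host-gateway \
  -e EXAMPLE_LLM_ENDPOINT="http://host.docker.internal:11434" \
  -e EXAMPLE_ALLOWED_ORIGINS="http://localhost:3000" \
  --name surfsense \
  --restart unless-stopped \
  ghcr.io/modsetter/surfsense:latest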

Expect a few problems on first run. If the container exits immediately, check docker logs -f surfsense for stack traces. Common causes are Docker not being allowed to bind ports, SELinux or AppArmor blocking volume writes, or missing environment variables for required connectors. Run docker ps to confirm the container is running. If you see permission errors on /data, stop the container and either change the volume ownership or mount a host directory with consistent UID mapping; a sketch of the host-directory approach follows the upgrade steps below. To upgrade, pull the new image, stop and remove the old container, then recreate it with the same volume name so data stays intact. Example steps:
bash
docker pull ghcr.io/modsetter/surfsense:latest
docker stop surfsense
docker rm surfsense

# re-run the docker run command above with the same surfsense-data volume
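
If you hit the /data permission errors mentioned above, one option is to bind-mount a host directory instead of the named volume and align its ownership. The UID 1000 below and the /srv/surfsense-data path are assumptions for illustration; check which user the image actually runs as first.
bash
# UID/GID 1000:1000 is an assumption; inspect the image to confirm the runtime user.
docker image inspect ghcr.io/modsetter/surfsense:latest --format '{{.Config.User}}'
mkdir -p /srv/surfsense-data
sudo chown -R 1000:1000 /srv/surfsense-data
docker run -d -p 3000:3000 -p 8000:8000 \
  -v /srv/surfsense-data:/data \
  --name surfsense \
  --restart unless-stopped \
  ghcr.io/modsetter/surfsense:latest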

Verify operation by opening the dashboard, checking that the health endpoints on port 8000 respond if you exposed them, and running a sample query against the local LLM. If search results look poor, rerun ingestion for the affected documents and check the embedding model selection.
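
These are the checks I run after a restart or upgrade; the /health path is an assumption, so substitute whichever health endpoint your SurfSense version actually exposes on port 8000.
bash
# /health is assumed; adjust to the endpoint your version provides.
curl -s http://localhost:8000/health
# recent backend log lines, useful for spotting ingestion or model errors
docker logs --tail 50 surfsense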

My takeaways: SurfSense installation via Docker is fast and repeatable. Make sure Docker is healthy, reserve ports 3000 and 8000, and use a named volume for persistent data. Start with a single local LLM or a small connector set, confirm queries, then add more sources. That approach saves time and avoids reindexing large corpora when settings change.
