# Setup Guide

## Prerequisites
- Docker & Docker Compose v2.20+
- 16 GB RAM recommended (for local LLMs)
- GPU optional (CPU inference works but is slower)
- A Markdown vault (Obsidian/Logseq compatible directory)
## Quick Start

### 1. Clone and configure

```bash
git clone <repo-url> second-brain
cd second-brain
cp .env.example .env
# Edit .env — at minimum, set POSTGRES_PASSWORD
```
### 2. Place your vault

Copy your Markdown notes into `./vault/`, or mount your existing Obsidian/Logseq vault:

```bash
# Option A: copy files
cp -r ~/obsidian-vault/* ./vault/

# Option B: symlink (Linux/macOS)
ln -s ~/obsidian-vault ./vault
```
The vault directory structure is preserved — subfolders become part of the document path.
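For example, a note at a hypothetical path like `./vault/projects/ai/rag-notes.md` keeps its subfolders in the resulting document path. A minimal sketch of that mapping:

```python
from pathlib import Path

# Hypothetical note inside the vault; subfolders carry into the document path.
vault = Path("vault")
note = vault / "projects" / "ai" / "rag-notes.md"

doc_path = note.relative_to(vault).as_posix()
print(doc_path)  # projects/ai/rag-notes.md
```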
### 3. Start services

```bash
docker compose up -d
```
This starts:
- PostgreSQL with pgvector (port 5432)
- Redis (port 6379)
- Ollama (port 11434)
- RAG API (port 8000)
- Ingestion Worker (background)
- AI Agents (background)
- Web UI (port 3000)
### 4. Wait for model download

Ollama pulls the embedding and chat models on first boot. This may take several minutes.

```bash
# Watch the bootstrap container logs
docker compose logs -f ollama-bootstrap
```
### 5. Check the UI

Open http://localhost:3000 in your browser.
## Service Ports
| Service | Port | URL |
|---|---|---|
| Web UI | 3000 | http://localhost:3000 |
| RAG API | 8000 | http://localhost:8000 |
| API Docs | 8000 | http://localhost:8000/docs |
| Ollama | 11434 | http://localhost:11434 |
| PostgreSQL | 5432 | localhost:5432 |
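To confirm the services are actually listening, a small check using only the Python standard library (ports taken from the table above; this only tests TCP reachability, not service health):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ports from the table above
for name, port in [("Web UI", 3000), ("RAG API", 8000),
                   ("Ollama", 11434), ("PostgreSQL", 5432)]:
    print(f"{name:12} {'up' if port_open('localhost', port) else 'down'}")
```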
## Configuration

All configuration is in `.env`. Key settings:
| Variable | Default | Description |
|---|---|---|
| `CHAT_MODEL` | `mistral` | Ollama model for chat |
| `EMBEDDING_MODEL` | `nomic-embed-text` | Ollama model for embeddings |
| `CHUNK_SIZE` | `700` | Target tokens per chunk |
| `SEARCH_THRESHOLD` | `0.65` | Minimum similarity score (0–1) |
| `AUTO_TAG` | `true` | Enable LLM-based auto-tagging |
| `AUTO_SUMMARIZE` | `true` | Enable LLM-based auto-summarization |
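As an illustration, a `.env` tuned for a lighter chat model and stricter retrieval might look like this (values are examples, not recommendations):

```ini
POSTGRES_PASSWORD=change-me
CHAT_MODEL=phi3
EMBEDDING_MODEL=nomic-embed-text
CHUNK_SIZE=500
SEARCH_THRESHOLD=0.75
AUTO_TAG=true
AUTO_SUMMARIZE=false
```

Restart the affected services after editing `.env` so the changes take effect.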
## Switching LLM Models

The system is model-agnostic. To use a different model:

```bash
# Pull the model
docker compose exec ollama ollama pull llama3

# Update .env
CHAT_MODEL=llama3

# Restart the affected services
docker compose restart rag-api agents
```
Popular model choices:

- `mistral` — fast, good quality (7B)
- `llama3` — excellent quality (8B/70B)
- `phi3` — lightweight, efficient (3.8B)
- `qwen2` — strong multilingual support
## Re-indexing the Vault

The ingestion worker automatically re-indexes changed files. To force a full re-index:

```bash
curl -X POST http://localhost:8000/api/v1/index/reindex \
  -H "Content-Type: application/json" \
  -d '{"force": true}'
```
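The same call can be made from Python with only the standard library. A sketch against the endpoint shown above (`build_reindex_request` is a hypothetical helper; the URL and `force` flag come from the `curl` example):

```python
import json
from urllib import request

def build_reindex_request(base_url: str = "http://localhost:8000",
                          force: bool = True) -> request.Request:
    """Build the POST request for a full re-index (send it with urlopen)."""
    return request.Request(
        f"{base_url}/api/v1/index/reindex",
        data=json.dumps({"force": force}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To actually send it (requires the RAG API to be running):
# with request.urlopen(build_reindex_request()) as resp:
#     print(resp.status)
```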
## Backup

```bash
# Backup database
docker compose exec postgres pg_dump -U brain second_brain > backup.sql

# Restore
docker compose exec -T postgres psql -U brain second_brain < backup.sql
```
The vault itself is just files — back it up with any file backup tool.
## Stopping / Resetting

```bash
# Stop all services (preserves data)
docker compose down

# Full reset (DELETES all data!)
docker compose down -v
```
## Obsidian Compatibility

The vault is fully compatible with Obsidian. You can:

- Open `./vault/` directly in Obsidian
- Use all Obsidian features (graph view, backlinks, templates, etc.)

The system reads `[[WikiLinks]]`, `#tags`, and YAML frontmatter.
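As an illustration of what gets extracted, a minimal sketch with deliberately simplified regexes (these are not the system's actual parser):

```python
import re

# Simplified patterns -- the real parser is more thorough.
WIKILINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]*)?\]\]")  # [[Target]] or [[Target|alias]]
TAG = re.compile(r"(?<!\S)#([\w/-]+)")                    # #tag, #nested/tag

note = "See [[Project Notes|notes]] and the roadmap, tagged #ai/llm."
links = WIKILINK.findall(note)
tags = TAG.findall(note)
print(links, tags)  # ['Project Notes'] ['ai/llm']
```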
## Logseq Compatibility

Point Logseq's graph folder to `./vault/`. The system handles:

- `[[Page references]]`
- `#tags` in journals and pages
- YAML frontmatter (Logseq's `::` properties are stored as-is)
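A sketch of how `key:: value` property lines can be read as-is (the pattern is illustrative, not the system's actual parser):

```python
import re

# key:: value lines in a Logseq page (simplified pattern)
PROPERTY = re.compile(r"^([\w-]+)::\s*(.+)$", re.MULTILINE)

page = "type:: note\ntags:: setup, docker\n\nBody text follows."
props = dict(PROPERTY.findall(page))
print(props)  # {'type': 'note', 'tags': 'setup, docker'}
```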