Local setup
StockSage runs on your machine. You supply LLM access via Ollama (no API key) or a cloud provider (API key).
Requirements
- Python 3.13+ (see pyproject.toml)
- Git
Optional: uv, Ollama (for local models), Jupyter.
Install
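With uv (a minimal sketch, assuming the `uv sync` workflow referenced under Troubleshooting installs the project and its dev dependencies):

```sh
uv sync
```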
Without uv:
```sh
python -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate
pip install -e ".[dev]"
```
Environment
Edit `.env`: set `LLM_MODEL` and provider keys per model-providers.md.
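A minimal example using the Ollama tag from the Ollama section below; the cloud placeholders are hypothetical variable names, not the project's actual configuration (see model-providers.md for the real ones):

```sh
# .env — local Ollama model, no API key required
LLM_MODEL=ollama/qwen2.5:14b-instruct

# Cloud alternative (placeholder names; consult model-providers.md):
# LLM_MODEL=<provider>/<model>
# <PROVIDER>_API_KEY=...
```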
Validate: run `make check`, or `python -m src.core.config.check`.
Run
With the venv activated: `uvicorn src.app.main:app --reload`. Or: `uv run uvicorn src.app.main:app --reload`.
Open http://127.0.0.1:8000.
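A quick sanity check that the server is responding (assuming the default dev address above):

```sh
# prints the HTTP status code; anything in the 200/300 range means the app is up
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:8000/
```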
Platform notes
macOS / Linux
Use `source .venv/bin/activate` as above.
Windows
Use PowerShell:
```powershell
python -m venv .venv
.\.venv\Scripts\Activate.ps1
pip install -e ".[dev]"
uvicorn src.app.main:app --reload
```
Ollama
- Install and start Ollama.
- Pull a model: `ollama pull qwen2.5:14b-instruct` (or another tag you set in `LLM_MODEL`).
- Set `LLM_MODEL=ollama/qwen2.5:14b-instruct` in `.env`.
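A quick way to confirm the daemon and the pulled tag (a sketch, assuming a default local Ollama install):

```sh
ollama serve    # start the daemon if it is not already running (or use the desktop app)
ollama list     # the tag referenced in LLM_MODEL should appear in this list
```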
Jupyter (optional)
Not required for the web UI.
Troubleshooting
| Symptom | Fix |
|---|---|
| `ModuleNotFoundError` | Run from repo root; ensure venv activated and `pip install -e .` or `uv sync`. |
| Ollama errors | Run `make check`; confirm `ollama serve` is running and the model is pulled. |
| API errors | Verify provider env vars; see model-providers.md. |
| Heavy RAM use | Use a smaller Ollama model or a cloud API; close other apps. |
UI error decision tree
If you see frequent UI stream errors, run this sequence:
- Validate configuration: run `make check` (or `python -m src.core.config.check`).
- Confirm active model settings: check `LLM_MODEL` and provider keys in `.env` (see model-providers.md).
- If using Ollama, verify daemon + model tag: confirm `ollama serve` is running and the tag set in `LLM_MODEL` has been pulled.
- If the UI shows `Reference: <id>`, check server logs for that reference ID; browser errors are sanitized by design.
- If Ollama is unstable, switch temporarily to a known-good cloud model in `.env` (see the sketch below) to isolate transport vs. model/runtime problems.
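A minimal sketch of that last isolation step; the provider and variable names are placeholders, not the project's actual configuration (use the real names from model-providers.md):

```sh
# .env — temporarily point at a cloud model to rule out local Ollama issues
# (placeholder names below; see model-providers.md for the real ones)
LLM_MODEL=<provider>/<known-good-model>
<PROVIDER>_API_KEY=...

# Restore LLM_MODEL=ollama/... once you know whether the errors follow the local runtime.
```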