Technical FAQ

Ollama connection fails

Verify that Ollama is running: curl http://localhost:11434/api/tags. If Ollama runs on a different host, update the Ollama URL under Settings → Avsel IA, and make sure the firewall allows connections on port 11434.
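A minimal set of checks, assuming the default Ollama port 11434; ollama-host is a placeholder for your actual hostname or IP, and the ufw command applies only to Ubuntu-style firewalls:

# Check that the local Ollama daemon answers on its default port
curl -s http://localhost:11434/api/tags

# If Ollama runs on another machine, test reachability from the Odoo server
# (replace ollama-host with your actual hostname or IP)
curl -s http://ollama-host:11434/api/tags

# On Ubuntu with ufw, allow inbound connections on the Ollama port
sudo ufw allow 11434/tcp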

Model download is slow or fails

Large models (14B parameters and up) require a 10-20 GB download, so use a stable connection. If the download is interrupted, run ollama pull model_name again — it resumes from where it stopped.
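For example, with a 14B model (qwen2.5:14b is used here purely as an illustration):

# Pull a large model; if the download is interrupted,
# re-running the same command resumes rather than starting over
ollama pull qwen2.5:14b

# List the models already downloaded locally
ollama list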

GPU not detected

Ensure the NVIDIA drivers are installed: nvidia-smi should list your GPU. Ollama detects CUDA automatically. If you run Ollama in Docker, start the container with --gpus all.
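A sketch of the Docker case, assuming the official ollama/ollama image and that the NVIDIA Container Toolkit is already installed on the host:

# Confirm the driver sees the GPU
nvidia-smi

# Run the Ollama container with GPU access
docker run -d --gpus all -v ollama:/root/.ollama \
  -p 11434:11434 --name ollama ollama/ollama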

Slow responses without GPU

CPU inference is roughly 5-10x slower than GPU inference. Use a smaller model (e.g. qwen2.5:7b) or switch to a cloud provider (OpenAI or Gemini) for faster responses.
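Switching to a smaller model is a one-line change, for example:

# Pull and use a 7B model, which is much more practical on CPU-only hosts
ollama pull qwen2.5:7b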

Multi-company setup

Avsel IA respects Odoo's multi-company rules. Each company can have its own LLM configuration. Queries are automatically filtered by the active company.

pgvector not available

The RAG module requires the pgvector PostgreSQL extension. Install it, using the package that matches your PostgreSQL major version (14 shown below):

sudo apt install postgresql-14-pgvector
psql -d your_db -c "CREATE EXTENSION vector;"
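To confirm the extension is actually enabled in the target database, you can query the pg_extension catalog (your_db as above):

# Should return one row: vector plus its installed version
psql -d your_db -c "SELECT extname, extversion FROM pg_extension WHERE extname = 'vector';"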

How to update modules

Download the latest version, upload via Apps → Upload Module, then click "Upgrade" on the module. Your data and configuration are preserved.
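If you administer the server directly, Odoo's standard command-line upgrade is an alternative to the Apps screen. This is a sketch: avsel_ia is an assumed technical module name (check the actual name in your Apps list), and your_db is your database name:

# Upgrade the module from the command line, then exit
./odoo-bin -d your_db -u avsel_ia --stop-after-init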