Rıdvan Tülünay (TulunaY)

Step-by-step guide: install Ollama, download an AI model, and connect it to LivChart for local AI-powered dashboards.
Running AI models locally used to be complex. With Ollama, it's a few terminal commands.
In this guide, I'll walk through installing Ollama, downloading a business-capable AI model, and connecting it to LivChart for AI-powered dashboard generation — all running locally, no cloud dependency.
macOS:

```shell
brew install ollama
```

Linux:

```shell
curl -fsSL https://ollama.com/install.sh | sh
```

Windows: download the installer from ollama.com.
After installation, start the Ollama server:
```shell
ollama serve
```
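Once started, the server answers plain HTTP on port 11434 (a GET on the root path returns a short status message). A quick probe, assuming `curl` is available:

```shell
# Probe the default Ollama port; prints a status either way (sketch).
if curl -s --max-time 2 http://localhost:11434/ >/dev/null 2>&1; then
  status="running"
else
  status="not reachable"
fi
echo "Ollama server: $status"
```

If this reports "not reachable", fix it now — every later step in this guide depends on the server being up.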
For business analytics, I recommend starting with Qwen2.5 7B — it handles multilingual prompts well (including Turkish) and performs reliably for chart generation.
```shell
ollama pull qwen2.5:7b
```
This downloads approximately 4.7 GB. Other good options:
| Model | Size | Best For | RAM Required |
|---|---|---|---|
| Qwen2.5 7B | 4.7 GB | Multilingual analytics | 8 GB |
| Llama 3.1 8B | 4.9 GB | High-accuracy charts | 16 GB |
| Gemma 4 E2B | 1.6 GB | Fast interactive use | 8 GB |
| Mistral 7B | 4.1 GB | Lightweight deployment | 8 GB |
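The RAM column above maps naturally onto a simple chooser. A rough sketch — `qwen2.5:7b` matches the pull command in this guide, while `llama3.1:8b` is my assumed registry tag for Llama 3.1 8B, so confirm the exact names with `ollama list` after pulling:

```shell
# Sketch: map available RAM (GB) to a model tag from the table above.
pick_model() {
  if [ "$1" -ge 16 ]; then
    echo "llama3.1:8b"   # high-accuracy charts (assumed tag)
  elif [ "$1" -ge 8 ]; then
    echo "qwen2.5:7b"    # multilingual analytics
  else
    echo "none-fits"     # below 8 GB, reach for a smaller model like Gemma 4 E2B
  fi
}
pick_model 8   # prints: qwen2.5:7b
```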
Test the model interactively:

```shell
ollama run qwen2.5:7b
```
Try a business prompt:
```
Show me a bar chart of monthly revenue by region for Q1 2026
```
If the model responds with structured output, it's working.
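You can run the same check over Ollama's HTTP API — the interface a client application connects to. This sketch posts the test prompt to the standard `/api/generate` endpoint; it only returns a reply if `ollama serve` is still running:

```shell
# Send the test prompt to Ollama's /api/generate endpoint (sketch).
payload='{"model":"qwen2.5:7b","prompt":"Show me a bar chart of monthly revenue by region for Q1 2026","stream":false}'
curl -s http://localhost:11434/api/generate -d "$payload" \
  || echo "No response -- is ollama serve running?"
```

Setting `"stream": false` returns one complete JSON object instead of a token-by-token stream, which is easier to inspect by eye.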
In LivChart's AI settings, point it at your local Ollama instance:

- Endpoint: `http://localhost:11434` (default Ollama port)
- Model: `qwen2.5:7b`
If the connection succeeds, you're ready to use AI-powered chart generation.
- "Connection refused": Make sure Ollama is running (`ollama serve`), and check that port 11434 is not blocked by a firewall.
- "Model not found": Run `ollama list` to see downloaded models. If the list is empty, pull a model first.
- "Slow responses": Try a smaller model (Gemma 4 E2B) or enable GPU acceleration. On Apple Silicon, Ollama uses Metal by default.
- "Incorrect charts": Rephrase your prompt with more specific terms, and name the chart type explicitly: "Create a line chart showing..."
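That last fix — naming the chart type explicitly — is easy to make a habit of by templating your prompts. A tiny hypothetical helper:

```shell
# Hypothetical helper: build a prompt that names the chart type up front.
chart_prompt() {
  echo "Create a $1 chart showing $2"
}
chart_prompt line "monthly revenue by region for Q1 2026"
# -> Create a line chart showing monthly revenue by region for Q1 2026
```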
| Setup | RAM | GPU | Models |
|---|---|---|---|
| Entry-level | 16 GB | Not required | Gemma 4 E2B, Mistral 7B |
| Mid-range | 32 GB | Recommended | Qwen2.5 7B, Llama 3.1 8B |
| Enterprise | 64 GB+ | Multi-GPU | Large models, concurrent users |
The setup takes under 5 minutes. The benefits last much longer.
Try it yourself — download LivChart and connect Ollama for local AI-powered analytics.