Been running n8n with Ollama for a few months now for work automation. Wanted to share what I’ve learned since it’s not super well-documented.
The setup is just Docker Compose with n8n + Ollama + Postgres. n8n’s HTTP Request node talks directly to Ollama’s REST API — no custom nodes needed.
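For reference, a minimal compose file along those lines. Service names, ports, credentials, and volumes here are my own choices, not anything canonical — treat it as a starting sketch (the n8n `DB_POSTGRESDB_*` env vars are the documented way to point n8n at Postgres):

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: n8n        # change this in real use
      POSTGRES_DB: n8n
    volumes:
      - pg_data:/var/lib/postgresql/data

  ollama:
    image: ollama/ollama
    volumes:
      - ollama_data:/root/.ollama   # keeps pulled models across restarts

  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"                 # n8n web UI
    environment:
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: n8n
      DB_POSTGRESDB_DATABASE: n8n
    depends_on:
      - postgres
      - ollama

volumes:
  pg_data:
  ollama_data:
```

Inside the compose network, n8n reaches Ollama at `http://ollama:11434` — no ports need to be published for Ollama itself.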
What I’m running:
- Email digest every morning (IMAP → Ollama → Slack)
- Document summarization (PDF watcher → Ollama → notes)
- Lead scoring from form webhooks
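In all three, the HTTP Request node just POSTs to Ollama's `/api/generate` endpoint with a JSON body roughly like this (model name and prompt are placeholders — use whatever you've pulled; `{{ $json.body }}` is n8n's expression syntax for pulling a field from the incoming item):

```json
{
  "model": "llama3.2",
  "prompt": "Summarize the following email in 3 bullet points:\n\n{{ $json.body }}",
  "stream": false
}
```

With `"stream": false` you get one JSON response back with the full completion in the `response` field, which is easiest to wire into the next node.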
Zero API costs, everything stays on my server. If anyone wants the workflow templates I have a pack: https://workflows.neatbites.com/
Happy to answer questions about the setup.


I was playing with ministral-3 3b on a 3060. The model loads quickly and starts responding almost instantly, but generation itself is a bit slow: a long response (~5 paragraphs) can take 15-20 seconds to finish.
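For what it's worth, you don't have to eyeball the speed: with `"stream": false`, Ollama's `/api/generate` response includes `eval_count` (tokens generated) and `eval_duration` (wall time in nanoseconds), so you can compute actual tokens/sec. Quick sketch (the numbers below are made up to roughly match a ~15 s, 5-paragraph reply):

```python
def tokens_per_second(resp: dict) -> float:
    """Generation throughput from an Ollama /api/generate response.

    Ollama reports eval_count (tokens generated) and eval_duration
    (nanoseconds) in its non-streaming JSON response.
    """
    return resp["eval_count"] / (resp["eval_duration"] / 1e9)

# Hypothetical response fields: 300 tokens in 15 seconds
resp = {"eval_count": 300, "eval_duration": 15_000_000_000}
print(f"{tokens_per_second(resp):.1f} tok/s")  # 20.0 tok/s
```

Anything in the 15-25 tok/s range is pretty normal for a small model on a 3060.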
Cries in 1070
I’d still give it a shot. A quick check of benchmarks suggests the 1070 isn’t that much slower than the 3060 in general, though I don’t know if that holds for ML workloads.