Honestly, Phi-4 is just 8GB, which even my laptop 2070 can handle, and it’s not that much worse than ChatGPT for most stuff.
Runs pretty well as an AppImage on my Linux boxes. On my workstation I run a 3090, and the new Gemma model is about 20GB and really good in my experience so far.
It’s also by Microsoft (disclaimer: absolutely fuck Microsoft), which owns ChatGPT, so it makes sense that it’s somewhat capable.
Microsoft owns ChatGPT … 🤔🙈
You ever try Alpaca?
https://flathub.org/apps/com.jeffser.Alpaca
Open source alternative to LM Studio for simple chat with local LLMs