In my experience it depends on the task. Every model seems to have different strengths across a wide range of prompts and information.
+1 for Mistral, they released the first (or one of the first) Apache-licensed open-source models. I run Mistral-7B and variant fine-tunes locally, and they’ve always been really high quality overall. Mistral-Medium packed a punch (mid-size obviously), but it definitely competes with the big ones at least.
Heat guns are what I use to loosen things up.
You got me really curious so I prompted ChatGPT to give me a redditor style comment that’s anti-Trump followed by another far-left redditor’s reply to it and this is what it gave me:
Original Comment (Left-Leaning Anti-Trump Perspective):
“It’s honestly mind-blowing how Trump still has a cult following after everything. The man was impeached twice, incited an insurrection, and openly grifts his supporters with fake fundraising schemes. And yet, somehow, people still think he’s their ‘savior.’ The cognitive dissonance is off the charts.”
Far-Left Redditor Response:
“Because the system is fundamentally broken. Liberals act like voting blue is enough when the entire establishment—from corporate Dems to the GOP—is complicit in maintaining capitalism’s stranglehold. Trump is just a symptom of the rot. The real solution isn’t ‘vote harder’; it’s dismantling the system that enabled him in the first place.”
^ – this seems to be disturbingly accurate, I can only imagine this being produced at automated scale at this point.
Yep, someone in another sub once said “I hope your plant dies” and their whole account got permabanned for “inciting violence”. Against a plant. Hysterical. That site is cooked.
From what I’ve seen on the reddit right-leaning subs most of them are very against the entire war and sending money/troops at all.
https://ollama.ai/, this is what I’ve been using for over a year now. New models come out regularly, and you just “ollama pull <model ID>” and then it’s available to run locally. Then you can use Docker to run https://www.openwebui.com/ locally, giving it a ChatGPT-style interface (but even better and more configurable, and you can run prompts against any number of models you select at once).
All free and available to everyone.
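For anyone who wants the concrete steps, here’s a rough sketch of the workflow described above. The model name (`mistral`) and the Open WebUI port/volume settings are just illustrative defaults, assuming you already have Ollama and Docker installed; check each project’s docs for the exact invocation for your setup.

```shell
# Pull a model with Ollama (model name is an example; pick whichever you want)
ollama pull mistral

# Run it interactively in the terminal
ollama run mistral

# Run Open WebUI in Docker as a local front end (flags are an illustrative
# sketch based on the project's documented quick-start; verify against
# https://www.openwebui.com/ before relying on them)
docker run -d \
  -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

Once the container is up, the web interface is typically reachable at http://localhost:3000, where you can point it at your local Ollama models.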