

Can’t it source other LLM outputs as “verified source” and thus still say whatever sounds good, like any LLM?
No. The footer tells you what the source is. Anything the model generates on its own is tagged confidence: unverified | source: model, flagged explicitly by default. To get source: docs or source: scratchpad, it needs direct, traceable, human-originated provenance. You control what goes in. The FAQ outlines the sources and their strength rankings; it’s not vibes.
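A minimal sketch of what consuming that footer looks like. The exact footer format and field names are defined by llama-conductor itself; the string below and the "trusted" rule are illustrative assumptions, not the project's implementation:

```python
# Hypothetical footer string; the real format is defined by llama-conductor.
footer = "confidence: unverified | source: model"

# Split "key: value" pairs on the " | " separator.
fields = dict(part.split(": ", 1) for part in footer.split(" | "))

# Per the ranking described above, only docs/scratchpad provenance
# counts as human-originated; model output stays unverified.
trusted = fields["source"] in {"docs", "scratchpad"}
```

The point is that the footer is machine-checkable: a downstream script can refuse to act on anything where `trusted` is false.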
Providing “technical” verification, e.g. SHA, gives no assurance that the content itself is from a reputable source.
SHA verifies the document hasn’t been altered since it entered your stack. Source quality is your call. GIGO is always an issue, but if you scope the source correctly it won’t drift. And if it does, you’ll know, because the footer tells you exactly where the answer came from.
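For concreteness, here is a sketch of that tamper check. The file path and the idea of storing a recorded digest are illustrative assumptions; the repo's actual ingestion flow may differ:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large documents don't blow memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Record the digest when a document enters your stack, then later
# compare: if the digest changed, the document was altered after ingestion.
# recorded = sha256_of("docs/policy.md")
# assert sha256_of("docs/policy.md") == recorded, "altered since ingestion"
```

That is all SHA buys you: alteration detection, not source vetting, which is exactly the division of labor described above.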
The cheatsheet system is the clearest example of how this works in practice: you define terms once in a JSONL file, and the model pegs its reasoning to your definition from then on. It can’t revert to something you didn’t teach it. That fingerprint covers everything it outputs.
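To make the shape of that concrete: JSONL is just one JSON object per line, so loading a cheatsheet is a few lines. The field names below ("term", "definition") are assumptions for illustration; the actual schema is whatever the repo's cheatsheet files use:

```python
import json

# Hypothetical cheatsheet.jsonl contents: one pinned definition per line.
cheatsheet_lines = [
    '{"term": "drift", "definition": "an answer whose source tag no longer matches its provenance"}',
    '{"term": "footer", "definition": "the confidence/source line appended to every answer"}',
]

# Build the lookup the model is pegged to: a dictionary, not a guess.
definitions = {}
for line in cheatsheet_lines:
    entry = json.loads(line)
    definitions[entry["term"]] = entry["definition"]
```

Because the definition is a literal lookup from your file, there is nothing for the model to "revert" to.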
… the user STILL has to verify that whatever is provided is coherent and a third party is actually a good source.
Yes, deliberately. That’s a feature.
Like I said, most LLM tools try to replace your thinking; this one doesn’t. The human stays in the loop. The model’s limitations are visible. You decide what to trust. Maybe that’s enough, maybe it isn’t.
EDIT: giant wall of text. See - https://codeberg.org/BobbyLLM/llama-conductor#some-problems-this-solves




You’re welcome. Hope it makes sense. If not, you can marvel at the (many, many) nested swears in my commit messages.