Hello!

As a handsome local AI enjoyer™ you’ve probably noticed one of the big flaws with LLMs:

It lies. Confidently. ALL THE TIME.

(Technically, it “bullshits” - https://link.springer.com/article/10.1007/s10676-024-09775-5)

I’m autistic and extremely allergic to vibes-based tooling, so … I built a thing. Maybe it’s useful to you too.

The thing: llama-conductor

llama-conductor is a router that sits between your frontend (OWUI / SillyTavern / LibreChat / etc) and your backend (llama.cpp + llama-swap, or any OpenAI-compatible endpoint). Local-first (because fuck big AI), but it should talk to anything OpenAI-compatible if you point it there (note: experimental so YMMV).

I tried to make a glass-box that makes the stack behave like a deterministic system, instead of a drunk telling a story about the fish that got away.

TL;DR: “In God we trust. All others must bring data.”

Three examples:

1) KB mechanics that don’t suck (1990s engineering: markdown, JSON, checksums)

You keep “knowledge” as dumb folders on disk. Drop docs (.txt, .md, .pdf) in them. Then:

  • >>attach <kb> — attaches a KB folder
  • >>summ new — generates SUMM_*.md files with SHA-256 provenance baked in (see the sketch below)
  • the same >>summ pass moves the original doc into a sub-folder once it’s been summarized
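
The provenance bit is 1990s-boring on purpose. A minimal sketch of the idea (my own illustration - hypothetical file layout, not the project’s actual code):

```python
# Sketch: bake SHA-256 provenance into a SUMM_*.md header.
# Illustration only - llama-conductor's real code may differ.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash the exact bytes the summary was built from."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def write_summ(source: Path, summary_text: str) -> Path:
    """Write SUMM_<name>.md with the source file's hash baked into the header."""
    out = source.parent / f"SUMM_{source.stem}.md"
    header = f"<!-- source: {source.name} | sha256: {sha256_of(source)} -->\n\n"
    out.write_text(header + summary_text, encoding="utf-8")
    return out

def verify(source: Path, recorded_hash: str) -> bool:
    """Drift check: if the source file changed later, the hash won't match."""
    return sha256_of(source) == recorded_hash
```

Same trick is what lets an answer point back at the exact bytes it was grounded in.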

Now, when you ask something like:

“yo, what did the Commodore C64 retail for in 1982?”

…it answers from the attached KBs only. If the fact isn’t there, it tells you - explicitly - instead of winging it. Eg:

The provided facts state the Commodore 64 launched at $595 and was reduced to $250, but do not specify a 1982 retail price. The Amiga’s pricing and timeline are also not detailed in the given facts.

Missing information includes the exact 1982 retail price for Commodore’s product line and which specific model(s) were sold then. The answer assumes the C64 is the intended product but cannot confirm this from the facts.

Confidence: medium | Source: Mixed

No vibes. No “well probably…”. Just: here’s what’s in your docs, here’s what’s missing, don’t GIGO yourself into stupid.

And when you’re happy with your summaries, you can:

  • >>move to vault — promote those SUMMs into Qdrant for the heavy mode.

2) Mentats: proof-or-refusal mode (Vault-only)

Mentats is the “deep think” pipeline against your curated sources. It’s enforced isolation:

  • no chat history
  • no filesystem KBs
  • no Vodka
  • Vault-only grounding (Qdrant)

It runs triple-pass (thinker → critic → thinker). It’s slow on purpose. You can audit it. And if the Vault has nothing relevant? It refuses and tells you to go pound sand:

FINAL_ANSWER:
The provided facts do not contain information about the Acorn computer or its 1995 sale price.

Sources: Vault
FACTS_USED: NONE
[ZARDOZ HATH SPOKEN]

Also yes, it writes a mentats_debug.log, because of course it does. Go look at it any time you want.

The flow is basically: Attach KBs → SUMM → Move to Vault → Mentats. No mystery meat. No “trust me bro, embeddings.”

3) Vodka: deterministic memory on a potato budget

Local LLMs have two classic problems: goldfish memory + context bloat that murders your VRAM.

Vodka fixes both without extra model compute. (Yes, I used the power of JSON files to hack the planet instead of buying more VRAM from NVIDIA).

  • !! stores facts verbatim (JSON on disk)
  • ?? recalls them verbatim (TTL + touch limits so memory doesn’t become landfill)
  • CTC (Cut The Crap) hard-caps context (last N messages + char cap) so you don’t get VRAM spikes after 400 messages

So instead of:

“Remember my server is 203.0.113.42” → “Got it!” → [100 msgs later] → “127.0.0.1 🥰”

you get:

!! my server is 203.0.113.42
?? server ip → 203.0.113.42 (with TTL/touch metadata)

And because context stays bounded: stable KV cache, stable speed, your potato PC stops crying.
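
If you’re wondering what CTC actually does under the hood, it’s roughly this (a sketch of the idea, not Vodka’s actual code - parameter names and defaults are mine):

```python
# Sketch: Cut The Crap - deterministically bound the context window.
# Keep only the last N messages, then enforce a hard character cap.
def cut_the_crap(messages: list[dict], max_messages: int = 12,
                 max_chars: int = 8000) -> list[dict]:
    kept = messages[-max_messages:]      # last N turns only
    bounded, total = [], 0
    for msg in reversed(kept):           # walk newest-first so recent turns survive
        total += len(msg.get("content", ""))
        if total > max_chars:
            break
        bounded.append(msg)
    return list(reversed(bounded))       # restore chronological order
```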


There’s more (a lot more) in the README, but I’ve already over-autism’ed this post.

TL;DR:

If you want your local LLM to shut up when it doesn’t know and show receipts when it does, come poke it:

PS: Sorry about the AI slop image. I can’t draw for shit.

PPS: A human with ASD wrote this using Notepad++. If the formatting is weird, now you know why.

  • rozodru@piefed.social · 4 days ago

    soooo if it doesn’t know something it won’t say anything and if it does know something it’ll show sources…so essentially you plug this into Claude it’s just never going to say anything to you ever again?

    neat.

      • rozodru@piefed.social · 4 days ago

        don’t get me wrong, I love what you’ve built and it IS something that is sorely needed. I just find it funny that because of this you’ve pretty much made something like Claude just completely shut up. You’ve pretty much shown off the extremely sad state of Anthropic.

        • SuspciousCarrot78@lemmy.worldOP · 4 days ago

          I haven’t tried wiring it up to Claude, that might be fun.

          Claude has done alright by me :) Swears a lot, helps me fix code (honestly, I have no idea where he gets that from… :P). Expensive tho.

          Now ChatGPT… well… Gippity being Gippity is the reason llama-conductor exists in the first place.

          Anyway, I just added some OCR stuff into the router. So now, you can drop in a screenshot and get it to mull over that, or extract text directly from images etc.

          I have a few other little side-cars I’m thinking of adding over the next few months, based on what folks here have mentioned:

          • !!LIST — list all stored Vodka memories
          • !!FLUSH — flush the rolling chat summary
          • >>RAW — keep all the router mechanics but remove the presentation/polish prompts and just raw-dog it
          • >>JSON — schema + validity verifier
          • >>CALC — math, unit conversion, percentages, timestamps, sizes, etc.
          • >>FIND — pulls IPs, emails, URLs, hashes, IDs, etc. from documents and returns exact structured output

          I’m open to other suggestions / ideas.

          PS: It’s astonishing to me (and I built it!) just how FAST .py commands run. Basically instantaneous. So, I’m all for adding a few more “useful” cheat-codes like this.

  • floquant@lemmy.dbzer0.com · 7 days ago

    Holy shit I’m glad to be on the autistic side of the internet.

    Thank you for proving that fucking JSON text files are all you need and not “just a couple billion more parameters bro”

    Awesome work, all the kudos.

  • recklessengagement@lemmy.world · 6 days ago

    I strongly feel that the best way to improve the useability of LLMs is through better human-written tooling/software. Unfortunately most of the people promoting LLMs are tools themselves and all their software is vibe-coded.

    Thank you for this. I will test it on my local install this weekend.

  • termaxima@slrpnk.net · 7 days ago

    Hallucination is mathematically proven to be unsolvable with LLMs. I don’t deny this may have drastically reduced it, or not, I have no idea.

    But hallucinations will just always be there as long as we use LLMs.

    • SuspciousCarrot78@lemmy.worldOP · 6 days ago

      Agree-ish

      Hallucination is inherent to unconstrained generative models: if you ask them to fill gaps, they will. I don’t know how to “solve” that at the model level.

      What you can do is make “I don’t know” an enforced output, via constraints outside the model.

      My claim isn’t “LLMs won’t hallucinate.” It’s “the system won’t silently propagate hallucinations.” Grounding + refusal + provenance live outside the LLM, so the failure mode becomes “no supported answer” instead of “confident, slick lies.”

      So yeah: generation will always be fuzzy. Workflow-level determinism doesn’t have to be.

      I tried yelling, shouting, and even percussive maintenance but the stochastic parrot still insisted “gottle of geer” was the correct response.

  • Zexks@lemmy.world · 7 days ago

    This is awesome. I’ve been working on something similar. You’re not likely to get much useful feedback from here though - anything AI is by default bad here.

  • PolarKraken@lemmy.dbzer0.com · 7 days ago

    This sounds really interesting, I’m looking forward to reading the comments here in detail and looking at the project, might even end up incorporating it into my own!

    I’m working on something that addresses the same problem in a different way, the problem of constraining or delineating the specifically non-deterministic behavior one wants to involve in a complex workflow. Your approach is interesting and has a lot of conceptual overlap with mine, regarding things like strictly defining compliance criteria and rejecting noncompliant outputs, and chaining discrete steps into a packaged kind of “super step” that integrates non-deterministic substeps into a somewhat more deterministic output, etc.

    How involved was it to build it to comply with the OpenAI API format? I haven’t looked into that myself but may.

    • SuspciousCarrot78@lemmy.worldOP · 6 days ago

      Cheers!

      Re: OpenAI API format: 3.6 - not great, not terrible :)

      In practice I only had to implement a thin subset: POST /v1/chat/completions + GET /v1/models (most UIs just need those). The payload is basically {model, messages, temperature, stream…} and you return a choices[] with an assistant message. The annoying bits are the edge cases: streaming/SSE if you want it, matching the error shapes UIs expect, and being consistent about model IDs so clients don’t scream “model not found”. Which is actually a bug I still need to squash some more for OWUI 0.7.2. It likes to have its little conniptions.
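
      If you’re curious, the whole subset fits on a page. A minimal sketch (FastAPI here is my assumption for illustration, not necessarily what llama-conductor actually uses):

      ```python
      # Sketch: the thin OpenAI-compatible subset most UIs need.
      # Illustration only; echoes the last message instead of calling a backend.
      import time, uuid
      from fastapi import FastAPI
      from pydantic import BaseModel

      app = FastAPI()
      MODEL_ID = "llama-conductor"   # keep model IDs consistent or clients scream

      class ChatRequest(BaseModel):
          model: str
          messages: list[dict]
          temperature: float | None = None
          stream: bool | None = False

      @app.get("/v1/models")
      def list_models():
          return {"object": "list", "data": [{"id": MODEL_ID, "object": "model"}]}

      @app.post("/v1/chat/completions")
      def chat_completions(req: ChatRequest):
          # A real router would forward req.messages to the backend here.
          answer = f"You said: {req.messages[-1].get('content', '')}"
          return {
              "id": f"chatcmpl-{uuid.uuid4().hex[:12]}",
              "object": "chat.completion",
              "created": int(time.time()),
              "model": MODEL_ID,
              "choices": [{
                  "index": 0,
                  "message": {"role": "assistant", "content": answer},
                  "finish_reason": "stop",
              }],
          }
      ```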

      But TL;DR: more plumbing than rocket science. The real pain was sitting down with pen and paper and drawing what went where and what wasn’t allowed to do what. Because I knew I’d eventually fuck something up (I did, many times), I needed a thing that told me “no, that’s not what this is designed to do. Do not pass go. Do not collect $200”.

      shrug I tried.

      • PolarKraken@lemmy.dbzer0.com · 6 days ago

        The very hardest part of designing software, and especially designing abstractions that aim to streamline use of other tools, is deciding exactly where you draw the line(s) between intended flexibility (the user should be able, and find it easy, to do what they want), and opinionated “do it my way here, and I’ll constrain options for doing otherwise”.

        You have very clear and thoughtful lines drawn here, about where the flexibility starts and ends, and where the opinionated “this is the point of the package/approach, so do it this way” parts are, too.

        Sincerely that’s a big compliment and something I see as a strong signal about your software design instincts. Well done! (I haven’t played with it yet, to be clear, lol)

        • SuspciousCarrot78@lemmy.worldOP · 6 days ago

          Thank you for saying that and for noticing it! Seeing you were kind enough to say that, I’d like to say a few things about how/why I made this stupid thing. It might be of interest to people. Or not LOL.

          To begin with, when I say I’m not a coder, I really mean it. It’s not false modesty. I taught myself this much over the course of a year, plus the reactivation of some very old skills (dormant for 30 years). When I decided to do this, it wasn’t from any school of thought or design principle. I don’t know how CS professionals build things. The last time I looked at an IDE was Turbo Pascal. (Yes, I’m that many years old. I think it probably shows, what with the >> ?? !! ## all over the place. I stopped IT-ing when Pascal, Amiga and BBS were still the hot new things.)

          What I do know is - what was the problem I was trying to solve?

          IF the following are true:

          1. I have ASD. If you tell me a thing, I assume you’re telling me a thing. I don’t assume you’re telling me one thing but mean something else.
          2. An LLM could “lie” to me, and I would believe it, because I’m not a subject matter expert on the thing (usually). Also see point 1.
          3. I want to believe it, because why would a tool say X but mean Y? See point 1.
          4. An LLM could lie to me in a way that is undetectable, because I have no idea what it’s reasoning over or how it’s reasoning over it. It’s literally a black box. I ask a Question → MAGIC WIRES → Answer.

          AND

          1. “The first principle is that you must not fool yourself and you are the easiest person to fool”

          THEN

          STOP.

          I’m fucked. This problem is unsolvable.

          Assuming LLMs are inherently hallucinatory within bounds (AFAIK, the current iterations all are), if there’s even a 1% chance that it will fuck me over (it has), then for my own sanity, I have to assume that such an outcome is a mathematical certainty. I cannot operate in this environment.

          PROBLEM: How do I interact with a system that is dangerously mimetic and dangerously opaque? What levers can I pull? Or do I just need to walk away?

          1. Unchangeable. Eat shit, BobbyLLM. Ok.
          2. I can do something about that…or at least, I can verify what’s being said, if the process isn’t too mentally taxing. Hmm. How?
          3. Fine, I want to believe it…but do I have to believe it blindly? How about a defensive position - “Trust but verify”? Hmm. How?
          4. Why does it HAVE to be opaque? If I build it, why do I have to hide the workings? I want to know how it works, breaks, and what it can do.

          Everything else flowed from those ideas. I actually came up with a design document (list of invariants). It’s about 1200 words or so, and unashamedly inspired by Asimov :)

          MoA / Llama-swap System

          System Invariants


          0. What an invariant is (binding)

          An invariant is a rule that:

          • Must always hold, regardless of refactor, feature, or model choice
          • Must not be violated temporarily, even internally. The system must not fuck me over silently.
          • Overrides convenience, performance, and cleverness.

          If a feature conflicts with an invariant, the feature is wrong. Do not add.


          1. Global system invariant rules:

          1.1 Determinism over cleverness

          • Given the same inputs and state, the system must behave predictably.

          • No component may:

            • infer hidden intent,
            • rely on emergent LLM behavior
            • or silently adapt across turns without explicit user action.

          1.2 Explicit beats implicit

          • Any influence on an answer must be inspectable and user-controllable.

          • This includes:

            • memory,
            • retrieval,
            • reasoning mode,
            • style transformation.

          If something affects the output, the user must be able to:

          • enable it,
          • disable it,
          • and see that it ran.

          Assume the system is going to lie. Make its lies loud and obvious.


          On and on it drones LOL. I spent a good 4-5 months just revising a tighter and tighter series of constraints, so that 1) it would be less likely to break and 2) if it did break, it would do so in a loud, obvious way.

          What you see on the repo is the best I could do, with what I had.

          I hope it’s something and I didn’t GIGO myself into stupid. But no promises :)

  • BaroqueInMind@piefed.social · 8 days ago

    I have no remarks, just really amused with your writing in your repo.

    Going to build a Docker and self host this shit you made and enjoy your hard work.

    Thank you for this!

    • SuspciousCarrot78@lemmy.worldOP · 8 days ago

      Thank you <3

      Please let me know how it works…and enjoy the >>FR settings. If you’ve ever wanted to be trolled by Bender (or a host of other 1990s / 2000s era memes), you’ll love it.

      • SuspciousCarrot78@lemmy.worldOP · 7 days ago

        There are literally dozens of us. DOZENS!

        I’m on a potato, so I can’t attach it to something super sexy, like a 405B or a MoE.

        If you do, please report back.

        PS: You may see (in the docs) occasional references to MoA that slipped past me. That doesn’t stand for Mixture of Agents. It stood for “Mixture of Assholes”. That’s always been my mental model for this.

        Or, in the language of my people, this was my basic design philosophy:

        YOU (question) → ROUTER+DOCS (“Ah shit, here we go again. I hate my life”)

        ROUTER+DOCS → Asshole 1: Qwen (“I’m right”)

        ROUTER+DOCS → Asshole 2: Phi (“No, I’m right”)

        ROUTER+DOCS → Asshole 3: Nanbeige (“Idiots, I’m right!”)

        (all assholes) → ROUTER+DOCS (“Jesus, WTF. I need booze now”)

        ROUTER+DOCS → YOU (answer)

        (this could have been funnier if the ASCII had actually worked, but man… Lemmy borks that)

        EDIT: If you want to be boring about it, it’s more like this

        https://pastebin.com/gNe7bkwa

        PS: If you like it, let other people in other places know about it.

  • FrankLaskey@lemmy.ml · 8 days ago

    This is very cool. Will dig into it a bit more later but do you have any data on how much it reduces hallucinations or mistakes? I’m sure that’s not easy to come by but figured I would ask. And would this prevent you from still using the built-in web search in OWUI to augment the context if desired?

    • SuspciousCarrot78@lemmy.worldOP · 8 days ago

      Comment removed by (auto-mod?) cause I said sexy bot. Weird.

      Restating again: On the stuff you use the pipeline/s on? About 85-90% in my tests. Just don’t GIGO (Garbage In, Garbage Out) your source docs…and don’t use a dumb LLM. That’s why I recommend Qwen3-4B 2507 Instruct. It does what you tell it to (even the abliterated one I use).

      • 7toed@midwest.social · 7 days ago

        abliterated one

        Please elaborate, that alone piqued my curiosity. Pardon me if I could’ve searched.

        • SuspciousCarrot78@lemmy.worldOP · 7 days ago

          Yes of course.

          Abliterated is a technical LLM term meaning “safety refusals removed”.

          Basically, abliteration removes the security theatre that gets baked into LLMs like ChatGPT.

          I don’t like my tools deciding for me what I can and cannot do with them.

          I decide.

          Anyway, the model I use has been modified with a newer, less lobotomy inducing version of abliteration (which previously was a risk).

          https://huggingface.co/DavidAU/Qwen3-4B-Hivemind-Instruct-NEO-MAX-Imatrix-GGUF/tree/main

          According to validation I’ve seen online (and of course, I tested it myself), it’s lost next to zero “IQ” and dropped refusals by about…90%.

          BEFORE: Initial refusals: 99/100

          AFTER: Refusals: 8/100 [lower is better], KL divergence: 0.02 (less than 1 is great, “0” is perfect.)

          In fact, in some domains it’s actually a touch smarter, because it doesn’t try to give you “perfect” model answers. In maths reasoning, for example, where the answer is basically impossible, it will say “the answer is impossible; here’s the nearest workable solution based on context” instead of getting stuck in a self-reinforcing loop, trying to please you, and then crashing.

          In theory, that means you could ask it for directions on how to cook Meth and it would tell you.

          I’m fairly certain the devs didn’t add the instructions for that in there, but if they did, the LLM won’t go “sorry, I can’t tell you, Dave”.

          Bonus: with my harness over the top, you’d have an even better idea if it was full of shit (it probably would be, because, again, I’m pretty sure they don’t train LLM on Breaking Bad).

          Extra double bonus: If you fed it exact instructions for cooking meth, using the methods I outlined? It will tell you exactly how to cook Meth, 100% of the time.

          Say…you…uh…wanna cook some meth? :P

          PS: if you’re more of a visual learner, this might be a better explanation

          https://www.youtube.com/watch?v=gr5nl3P4nyM

          • 7toed@midwest.social · 7 days ago

            Thank you again for your explanations. After being washed up with everything AI, I’m genuinely excited to set this up. I know what I’m doing today! I will surely be back.

            • SuspciousCarrot78@lemmy.worldOP · 7 days ago

              Please enjoy. Make sure you use >>FR mode at least once. You probably won’t like the seed quotes but maybe just maybe you might and I’ll be able to hear the “ha” from here.

  • WolfLink@sh.itjust.works · 8 days ago

    I’m probably going to give this a try, but I think you should make it clearer for those who aren’t going to dig through the code that it’s still LLMs all the way down and can still have issues - it’s just there are LLMs double-checking other LLMs work to try to find those issues. There are still no guarantees since it’s still all LLMs.

    • skisnow@lemmy.ca · 7 days ago

      I haven’t tried this tool specifically, but I do on occasion ask both Gemini and ChatGPT’s search-connected models to cite sources when claiming stuff and it doesn’t seem to even slightly stop them bullshitting and claiming a source says something that it doesn’t.

      • SuspciousCarrot78@lemmy.worldOP · 7 days ago

        Yeah, this is different. Try it. It gives you a cryptographic key to the source (which you must provide yourself: please be aware. GIGO).

        • skisnow@lemmy.ca · 7 days ago

          How does having a key solve anything? It’s not that the source doesn’t exist, it’s that the source says something different to the LLM’s interpretation of it.

          • SuspciousCarrot78@lemmy.worldOP · 7 days ago

            Yeah.

            The SHA isn’t there to make the model smarter. It’s there to make the source immutable and auditable.

            Having been burnt by LLMs (far too many times), I now start from a position of “fuck you, prove it”.

            The hash proves which bytes the answer was grounded in, should I ever want to check it. If the model misreads or misinterprets, you can point to the source and say “the mistake is here, not in my memory of what the source was.”.

            If it does that more than twice, straight in the bin. I have zero chill any more.

            Secondly, drift detection. If someone edits or swaps a file later, the hash changes. That means yesterday’s answer can’t silently pretend it came from today’s document. I doubt my kids are going to sneak in and change the historical prices of 8 bit computers (well, the big one might…she’s dead keen on being a hacker) but I wanted to be sure no one and no-thing was fucking with me.

            Finally, you (or someone else) can re-run the same question against the same hashed inputs and see if the system behaves the same way.

            So: the hashes don’t fix hallucinations (I don’t even think that’s possible, even with magic). The hashes make it possible to audit the answer and spot why hallucinations might have happened.

            PS: You’re right that interpretation errors still exist. That’s why Mentats does the triple-pass and why the system clearly flags “missing / unsupported” instead of filling gaps. The SHA is there to make the pipeline inspectable, instead of “trust me, bro.”.

            Guess what? I don’t trust you. Prove it or GTFO.

            • skisnow@lemmy.ca · 7 days ago

              The hash proves which bytes the answer was grounded in, should I ever want to check it. If the model misreads or misinterprets, you can point to the source and say “the mistake is here, not in my memory of what the source was.”.

              Eh. This reads very much like your headline is massively over-promising clickbait. If your fix for an LLM bullshitting is that you have to check all its sources then you haven’t fixed LLM bullshitting

              If it does that more than twice, straight in the bin. I have zero chill any more.

              That’s… not how any of this works…

    • SuspciousCarrot78@lemmy.worldOP · 7 days ago

      Fair point on setting expectations, but this isn’t just LLMs checking LLMs. The important parts are non-LLM constraints.

      The model never gets to “decide what’s true.” In KB mode it can only answer from attached files. Don’t feed it shit and it won’t say shit.

      In Mentats mode it can only answer from the Vault. If retrieval returns nothing, the system forces a refusal. That’s enforced by the router, not by another model.

      The triple-pass (thinker → critic → thinker) is just for internal consistency and formatting. The grounding, provenance, and refusal logic live outside the LLM.

      So yeah, no absolute guarantees (nothing in this space has those), but the failure mode is “I don’t know / not in my sources, get fucked” not “confidently invented gibberish.”
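
      To make “enforced by the router” concrete, the gate is conceptually something like this (an illustrative sketch, not the actual code - the callables are placeholders):

      ```python
      # Sketch: the refusal decision lives outside the model.
      def vault_answer(question: str, retrieve, ask_model) -> str:
          facts = retrieve(question)      # e.g. Qdrant over promoted SUMMs
          if not facts:
              # Router-enforced refusal: the model is never even called.
              return "The provided facts do not contain information about this.\nFACTS_USED: NONE"
          prompt = "Answer ONLY from these facts:\n" + "\n".join(facts)
          return ask_model(prompt + "\n\nQuestion: " + question)
      ```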

  • Angel Mountain@feddit.nl · 8 days ago

    Super interesting build

    And if programming doesn’t pan out please start writing for a magazine, love your style (or was this written with your AI?)

      • Karkitoo@lemmy.ml · 8 days ago

        meat popsicle

        ( ͡° ͜ʖ ͡°)

        Anyway, the other person is right. Your writing style is great !

        I successfully read your whole post and even the README. Probably the random outbursts grabbed my attention back to the text.

        Anyway, version 2: this is a very cool idea! I cannot wait to either:

        • incorporate it to my workflows
        • let it sit in a tab to never be touched ever again
        • theorycraft, do tests, and request features so much as to burn out

        Last but not least, thank you for not using github as your primary repo

        • SuspciousCarrot78@lemmy.worldOP · 8 days ago

          Hmm. One of those things is not like the other, one of those things just isn’t the same…

          About the random outburst: caused by TOO MUCH FUCKING CHATGPT WASTING HOURS OF MY FUCKING LIFE, LEADING ME DOWN BLIND ALLEYWAYS, YOU FUCKING PIEC…

          …sorry, sorry…

          Anyway, enjoy. Don’t spam my Github inbox plz :)

          • Karkitoo@lemmy.ml · 8 days ago

            Don’t spam my Github inbox plz

            I can spam your codeberg’s then ? :)

            About the random outburst: caused by TOO MUCH FUCKING CHATGPT WASTING HOURS OF MY FUCKING LIFE, LEADING ME DOWN BLIND ALLEYWAYS, YOU FUCKING PIEC… …sorry, sorry…

            Understandable, have a great day.

  • itkovian@lemmy.world · 8 days ago

    Based AF. Can anyone more knowledgeable explain how it works? I am not able to understand.

      • itkovian@lemmy.world · 8 days ago

        As I understand it, it corrects the output of LLMs. If so, how does it actually work?

        • SuspciousCarrot78@lemmy.worldOP · 8 days ago

          Good question.

          It doesn’t “correct” the model after the fact. It controls what the model is allowed to see and use before it ever answers.

          There are basically three modes, each stricter than the last. The default is “serious mode” (governed by serious.py). Low temp, punishes chattiness and inventiveness, forces it to state context for whatever it says.

          Additionally, Vodka (made up of two sub-modules - “cut the crap” and “fast recall”) operates at all times. Cut the crap trims context so the model only sees a bounded, stable window. You can think of it like a rolling summary of what’s been said. That summary isn’t LLM-generated either - it’s concatenation (dumb text matching), so no made-up vibes.

          Fast recall OTOH stores and recalls facts verbatim from disk, not from the model’s latent memory.

          It writes what you tell it to a text file and then, when you ask about it, spits it back out verbatim (!! / ??).

          And that’s the baseline.
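
          If it helps, the !! / ?? part is about as dumb as it sounds. A sketch of the idea (the field names, file name and limits are made up - not Vodka’s actual schema):

          ```python
          # Sketch: verbatim fact store with TTL + touch limits, plain JSON on disk.
          import json, time
          from pathlib import Path

          STORE = Path("vodka_facts.json")      # hypothetical filename
          TTL_SECONDS = 30 * 24 * 3600          # made-up defaults
          MAX_TOUCHES = 50

          def _load() -> dict:
              return json.loads(STORE.read_text()) if STORE.exists() else {}

          def store_fact(key: str, text: str) -> None:          # the !! path
              facts = _load()
              facts[key] = {"text": text, "created": time.time(), "touches": 0}
              STORE.write_text(json.dumps(facts, indent=2))

          def recall_fact(key: str):                            # the ?? path
              facts = _load()
              fact = facts.get(key)
              if not fact:
                  return None
              dead = (time.time() - fact["created"] > TTL_SECONDS
                      or fact["touches"] >= MAX_TOUCHES)
              if dead:
                  facts.pop(key)                # landfill prevention
              else:
                  fact["touches"] += 1
              STORE.write_text(json.dumps(facts, indent=2))
              return None if dead else fact["text"]
          ```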

          In KB mode, you make the LLM answer based on the above settings + with reference to your docs ONLY (in the first instance).

          When you >>attach <kb>, the router gets stricter again. Now the model is instructed to answer only from the attached documents.

          Those docs can even get summarized via an internal prompt if you run >>summ new, so that extra details are stripped out and you are left with just baseline who-what-where-when-why-how.

          The SUMM_*.md files come with SHA-256 provenance, so every claim can be traced back to a specific origin file (which gets moved to a subfolder).

          TL;DR: If the answer isn’t in the KB, it’s told to say so instead of guessing.

          Finally, Mentats mode (Vault / Qdrant). This is the “I am done with your shit” path.

          It’s all of the three above PLUS a counter-factual sweep.

          It runs ONLY on stuff you’ve promoted into the vault.

          What it does is take your question and re-form it in a particular way, so that all of the particulars must be answered in order for there to BE an answer. Any part missing or not in context? No soup for you!

          In step 1, it runs that past the thinker model. The answer is then passed on to a “critic” model (a different LLM). That model’s job is to look at the thinker’s output and say “bullshit - what about xyz?”.

          It sends that back to the thinker…who then answers and provides the final output. But if it CANNOT answer the critic’s questions (based on the stored info), it will tell you. No soup for you, again!

          TL;DR:

          The “corrections” happen by routing and constraint. The model never gets the chance to hallucinate in the first place, because it literally isn’t shown anything it’s not allowed to use. Basic premise - trust but verify (and I’ve given you all the tools I could think of to do that).
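
          In very rough pseudocode, the Mentats flow described above looks like this (an illustrative sketch, not the actual implementation - thinker, critic and vault_search are placeholder callables):

          ```python
          # Sketch: triple-pass Mentats (thinker -> critic -> thinker), Vault-only.
          def mentats(question, vault_search, thinker, critic):
              facts = vault_search(question)    # only promoted SUMMs in Qdrant
              if not facts:                     # nothing relevant -> forced refusal
                  return "FINAL_ANSWER: not in the Vault.\nFACTS_USED: NONE"

              draft = thinker(question, facts, objections=None)        # pass 1: answer from facts only
              objections = critic(question, facts, draft)              # pass 2: "bullshit - what about xyz?"
              return thinker(question, facts, objections=objections)   # pass 3: final answer or refusal
          ```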

          Does that explain it better? The repo has a FAQ but if I can explain anything more specifically or clearly, please let me know. I built this for people like you and me.

  • bilouba@jlai.lu · 8 days ago

    Very impressive! Do you have benchmarks to test the reliability? A paper would be awesome to contribute to the science.

    • SuspciousCarrot78@lemmy.worldOP · 8 days ago

      Just bush-league ones I did myself, that have no validation or normative values. Not that any of the LLM benchmarks seem to have those either LOL

      I’m open to ideas, time willing. Believe it or not, I’m not a code monkey. I do this shit for fun to get away from my real job.

      • bilouba@jlai.lu · 8 days ago

        I understand - no idea how to do it either. I heard about SWE‑Bench‑Lite, which seems to focus on real-world usage. Maybe try to contact “AI Explained” on YT, he’s the best IMO. Your solution might be novel or not, but he might help you figure that out. If it is indeed novel, it might be worth sharing with the larger community. Of course, I totally get that you might not want to do any of that. Thank you for your work!