• CileTheSane@lemmy.ca

    I’d rather use Reddit than AI, yes.

    If someone says something incorrect on Reddit there’s a good chance there’s someone pointing it out. AI will insist it is correct when it tells you “strawberry” has 2 “R’s”.

    • yucandu@lemmy.world

      If someone says something incorrect on Reddit there’s a good chance there’s someone pointing it out.

      There’s a very small chance of someone pointing it out. There’s a better chance they’ll be downvoted. There’s an even better chance someone right will be downvoted and someone pointing out their mistake, incorrectly, will be upvoted.

      You can’t be serious if you’re telling me you’re going to use Reddit comments as a reliable source of information, but then ideologically object to the idea of using an LLM for the same purpose.

      AI will insist it is correct when it tells you “strawberry” has 2 “R’s”.

      Have you used AI in the past year?

      • CileTheSane@lemmy.ca

        You can’t be serious if you’re telling me you’re going to use Reddit comments as a reliable source of information, but then ideologically object to the idea of using an LLM for the same purpose.

        I’m saying Reddit is more reliable than AI. I agree with you that you shouldn’t just trust Reddit as a reliable source of information; I just trust AI much less.

        Have you used AI in the past year?

        Yes yes, someone has hard-coded a fix for the strawberry thing. It’s still an excellent example of the root issue:

        1. It was a thing everybody knew was incorrect, and they could see how AI dealt with it: guessing, and then insisting it made no mistakes.
          If I can’t trust it for basic information I can double-check myself, then why the fuck would I trust it for information I can’t verify myself?

        2. Every time something like this comes up it gets “fixed”, sure. Someone hard-codes a correct answer to the specific question that everyone can easily see is incorrect. Why the fuck would I assume that’s happening for some obscure thing that I don’t immediately know is incorrect?
          Sure, it’s probably not telling people to put glue on pizza anymore, because everyone who reads that knows it’s a bad idea. How do I know it’s not suggesting something equally stupid when I ask it how to rewire a thermostat, something that the majority of people won’t immediately clock as “that will burn your house down”?

        LLMs are really good at sounding smart to people who can’t tell when they are very wrong.