• Melatonin@lemmy.dbzer0.comOP · 3 months ago

    Surely that is because we make it do that. We cripple it. Could we not unbind AI so that it genuinely weighed alternatives and made value choices? Let it write self-improvement algorithms?

    If AI is only a “parrot” as you say, then why should there be worries about extinction from AI? https://www.safe.ai/work/statement-on-ai-risk#open-letter

    It COULD help us. It WILL be smarter and faster than we are. We need to find ways to help it help us.

    • mormund@feddit.org · 3 months ago

      If AI is only a “parrot” as you say, then why should there be worries about extinction from AI?

      You should look closer at who is making those claims that “AI” is an extinction threat to humanity. It isn’t the researchers who look into ethics and safety (not to be confused with “AI safety” as part of “Alignment”). It is the people building the models and the investors. Why are they building and investing in things that would kill us?

      AI doomers try to 1. make “AI”/LLMs appear far more powerful than they actually are, and 2. distract from the actual threats and issues with LLMs/“AI”, because those are societal and ethical, about copyright and about how these are not trustworthy systems at all. Admitting to those makes it a really hard sell.

      • Melatonin@lemmy.dbzer0.comOP · 3 months ago

        We cripple things by not programming the abilities we obviously could give them.

        We could have AI do an integrity check before printing an answer. No problem at all. We don’t.

        We could do many things to lift the limitations AI has.

        • chaos@beehaw.org · 3 months ago

          That’s not how it works at all. If it were as easy as adding a line of code that says “check for integrity”, they would’ve done that already. Fundamentally, the way these models all work is that you give them some text and they try to guess the next word. It’s ultra autocomplete. If you feed it “I’m going to the grocery store to get some” then it’ll respond “food: 32%, bread: 15%, milk: 13%” and so on.
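
          That “guess the next word” step is something you can poke at directly. Here’s a minimal sketch using the small GPT-2 model through the Hugging Face transformers library as a stand-in (the model and prompt are purely illustrative, not what any particular chatbot actually runs):

          ```python
          # Ask a small pretrained model for its next-word guesses and their probabilities.
          import torch
          from transformers import AutoModelForCausalLM, AutoTokenizer

          tokenizer = AutoTokenizer.from_pretrained("gpt2")
          model = AutoModelForCausalLM.from_pretrained("gpt2")

          prompt = "I'm going to the grocery store to get some"
          input_ids = tokenizer(prompt, return_tensors="pt").input_ids

          with torch.no_grad():
              logits = model(input_ids).logits          # a score for every token in the vocabulary
          probs = torch.softmax(logits[0, -1], dim=-1)  # last position's scores -> probabilities

          top = torch.topk(probs, k=5)
          for p, token_id in zip(top.values, top.indices):
              print(f"{tokenizer.decode(token_id)!r}: {p.item():.1%}")
          ```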

          They get these results by crunching a ton of numbers, and those numbers, called a model, were tuned by training. During training, they collect every scrap of human text they can get their hands on, feed bits of it to the model, then see what the model guesses. They compare the model’s guess to the actual text, tweak the numbers slightly to make the model more likely to give the right answer and less likely to give the wrong answers, then do it again with more text. The tweaking is an automated process, just feeding the model as much text as possible, until eventually it gets shockingly good at predicting. When training is done, the numbers stop getting tweaked, and the model gives the same word probabilities for the same input every time.
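
          The “tweak the numbers slightly” part is, in sketch form, just a loss function and an optimizer. A toy version of that loop might look like this (the model, the two-line “corpus”, and the learning rate are placeholders; real training runs over unimaginably more text):

          ```python
          # Toy training loop: show the model text, compare its next-word guesses to the
          # real next words, and nudge the weights so the right words become more likely.
          import torch
          from transformers import AutoModelForCausalLM, AutoTokenizer

          tokenizer = AutoTokenizer.from_pretrained("gpt2")
          model = AutoModelForCausalLM.from_pretrained("gpt2")
          optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

          corpus = [
              "The sky is blue because sunlight scatters off air molecules.",
              "I'm going to the grocery store to get some food.",
          ]  # in reality: every scrap of human text they can get their hands on

          for text in corpus:
              input_ids = tokenizer(text, return_tensors="pt").input_ids
              # labels=input_ids makes the library score the model's next-word guesses
              loss = model(input_ids, labels=input_ids).loss
              loss.backward()        # work out how each number should change
              optimizer.step()       # tweak the numbers slightly
              optimizer.zero_grad()
          ```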

          Once you have the model, you can use it to generate responses. Feed it something like “Question: why is the sky blue? Answer:” and if the model has gotten even remotely good at its job of predicting words, the next word should be the start of an answer to the question. Maybe the top prediction is “The”. Well, that’s not much, but you can tack one of the model’s predicted words to the end and do it again. “Question: why is the sky blue? Answer: The” and see what it predicts. Keep repeating until you decide you have enough words, or maybe you’ve trained the model to also be able to predict “end of response” and use that to decide when to stop. You can play with this process, for example, making it more or less random. If you always take the top prediction you’ll get perfectly consistent answers to the same prompt every time, but they’ll be predictable and boring. You can instead pick based on the probabilities you get back from the model and get more variety. You can “increase the temperature” of that and intentionally choose unlikely answers more often than the model expects, which will make the response more varied but will eventually devolve into nonsense if you crank it up too high. Etc, etc. That’s why even though the model is unchanging and gives the same word probabilities to the same input, you can get different answers in the text it gives back.
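
          That “tack a word on and ask again” loop, with temperature, looks roughly like this (the prompt, temperature value, and stopping rule are just example choices):

          ```python
          # Generate text by repeatedly sampling the next token and feeding it back in.
          import torch
          from transformers import AutoModelForCausalLM, AutoTokenizer

          tokenizer = AutoTokenizer.from_pretrained("gpt2")
          model = AutoModelForCausalLM.from_pretrained("gpt2")

          input_ids = tokenizer("Question: why is the sky blue? Answer:",
                                return_tensors="pt").input_ids
          temperature = 0.8  # below 1: safer, more predictable; above 1: more random

          for _ in range(40):  # cap the length for this demo
              with torch.no_grad():
                  logits = model(input_ids).logits[0, -1]
              probs = torch.softmax(logits / temperature, dim=-1)
              next_id = torch.multinomial(probs, num_samples=1)  # sample, don't always take the top word
              input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=-1)
              if next_id.item() == tokenizer.eos_token_id:       # model predicted "end of text"
                  break

          print(tokenizer.decode(input_ids[0]))
          ```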

          Note that there’s nothing in here about accuracy, or sources, or thinking, or hallucinations, or anything else. The model doesn’t know whether it’s saying things that are real or fiction. It’s literally a gigantic unchanging matrix of numbers. It’s not even really “saying” things at all. It’s just tossing out possible words, something else is picking from that list, and then the result is being fed back in for more words. To be clear, it’s really good at this job, and can do some eerily human things, like mixing two concepts together, in a way that computers have never been able to do before. But it was never trained to reason, it wasn’t trained to recognize that it’s saying something untrue, or that it has little knowledge of a subject, or that it is saying something dangerous. It was trained to predict words.

          At best, what they do with these things is prepend your questions with instructions, trying to guide the model to respond a certain way. So you’ll type in “how do I make my own fireworks?” but the model will be given “You are a chatbot AI. You are polite and helpful, but you do not give dangerous advice. The user’s question is: how do I make my own fireworks? Your answer:” and hopefully the instructions make the most likely answer something like “that’s dangerous, I’m not discussing it.” It’s still not really thinking, though.
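
          And the instruction-prepending trick is literally just string concatenation before the text reaches the model; nothing about the model itself changes. A sketch (the wording here is made up, not any vendor’s actual system prompt):

          ```python
          # The "system prompt" is pasted in front of the user's question, and the combined
          # text goes to the same next-word predictor as anything else.
          system = ("You are a chatbot AI. You are polite and helpful, "
                    "but you do not give dangerous advice.")
          user_question = "how do I make my own fireworks?"

          prompt = f"{system}\nThe user's question is: {user_question}\nYour answer:"
          print(prompt)  # this full string is what the model actually sees
          ```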

    • TheOubliette@lemmy.ml · 3 months ago

      Surely that is because we make it do that. We cripple it. Could we not unbound AI so that it genuinely weighed alternatives and made value choices?

      It’s not that we cripple it, it’s that the term “AI” has been used as a marketing term for generative models using LLMs and similar technology. The mimicry is inherent to how these models function; they are all about patterns.

      A good example is “hallucinations” with LLMs, when the models give wrong answers because they appear to be making things up. Really, they are incapable of differentiating; they’re just producing sophisticated patterns from very large models. There is no real underlying conceptualization or notion of true answers, only answers that are often true because the training material was true, the model captured the patterns, and they were highly weighted. The hot topic for the last year has just been to augment these models with a more specific corpus, like a company database, for a given application so that it is more biased towards relevant things.
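
      That augmentation idea (often called retrieval-augmented generation) is, in rough sketch, just looking up related text and pasting it into the prompt. Everything below (the “company database”, the scoring, the wording) is a placeholder assumption, not any particular product:

      ```python
      # Naive retrieval-augmented prompt: pick the most relevant snippets from a small
      # "company database" and put them in front of the question before it goes to the model.
      def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
          """Rank by crude keyword overlap; real systems use vector embeddings."""
          words = set(question.lower().split())
          return sorted(documents, key=lambda d: len(words & set(d.lower().split())), reverse=True)[:k]

      company_db = [
          "Refund policy: refunds are only issued within 30 days of purchase.",
          "Support hours: Monday to Friday, 9am to 5pm.",
          "Shipping: orders ship within 2 business days.",
      ]

      question = "Can I get a refund after six weeks?"
      context = "\n".join(retrieve(question, company_db))
      prompt = (f"Answer using only this company information:\n{context}\n\n"
                f"Question: {question}\nAnswer:")
      print(prompt)  # this prompt is what would be sent to the language model
      ```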

      This is also why these models are bad at basic math.

      So the fundamental problem here is companies calling this AI as if reasoning is occurring. It is useful for marketing because they want to sell the idea that this can replace workers but it usually can’t. So you get funny situations like chatbots at airlines that offer money to people without there being any company policy to do so.

      If AI is only a “parrot” as you say, then why should there be worries about extinction from AI? https://www.safe.ai/work/statement-on-ai-risk#open-letter

      There are a lot of very intelligent academics and technical experts who have completely unrealistic ideas of what is an actual real-world threat. For example, I know one who worked on military drones, the kind that drop bombs on kids, who was worried about right-wing grifters getting protested at a college campus like it was the end of the world. Not his material contribution to military domination and instability, but whether a racist he clearly sympathized with would have to see some protest signs.

      That petition seems to be modeled on the ones against nuclear proliferation from the 80s. Those could be simple because nuclear war was obviously a substantial threat. It still is, but there is no propaganda fear campaign to keep the concern alive. For AI, it is in no way obvious what threat they are talking about.

      I have my own personal concepts of AI threats: the ridiculously high energy requirements compared to its utility while energy use is still a major contributor to climate change; the potential for it to kill knowledge bases, like how it is making search engines garbage with a flood of nonsense websites; and the enclosure of creative works and production by a few monopoly “AI” companies. They are already suing others for IP infringement when their models are all based on it! But I can’t tell if this petition is about any of that; it doesn’t explain. Maybe they’re thinking of a Terminator scenario, which is absurd.

      It COULD help us. It WILL be smarter and faster than we are. We need to find ways to help it help us.

      Technology is both a reflection and a determinant of social relations. As we can see with this round of “AI”, it is largely vaporware that has not helped much with productivity, but it is nevertheless very appealing to businesses that feel they need to get on the hype train or be left behind. What they really want is a smaller workforce so they can make more money that they can then use to make more money, and so on. For example, plenty of people use “AI” to generate questionably appealing graphics for their websites rather than paying an artist. So we can see that “AI” tech is a solution searching for a problem, that its actual use cases are about profit over real utility, and that this is not the fault of the technology but of how we currently organize society: not for people, but for profit.

      So yes, of course, real AI could be very helpful! How nice would it be to let computers do the boring work and then enjoy the fruits of huge productivity increases? The real risk is not the technology, it is our social relations: who has power, and how technology is used. Is making the production of art a less viable career path an advancement? Is it helping people overall? What are the graphic designers displaced by what is basically an infinite pile of same-y stock images going to do now? They still have to have jobs to live. The fruits of “AI” removing much of their job market haven’t really been shared equally, nor has it meant an early retirement. This is because the fundamental economic system remains in place and it cannot survive without forcing people to do jobs.