I’ve noticed an uptick in the number of pro-AI posts on this platform.

Various posts with titles similar to “When will people stop being afraid of AI?” or “Can we please acknowledge AI was very needed for X.”

Can’t tell if it’s the propaganda machine invading, or annoying teenage tech-bros who are detached from reality.

  • GarboDog@lemmy.world
    8 hours ago

    idk about it being a straw-man, but regardless, the reply was addressing the misleading framing that didn’t give proper credit to the researchers, and further the fact that LLMs were used for analysis, not for finding the exploit outright. So no, LLMs aren’t good at finding exploits without clear search queries from humans.

    As for the empathy and the robo-sexuality: it was the intentional point of the original comment that people form strong social attachments to LLMs or other objects that can communicate back to them. Even the movies we gave as examples touch on romantic/sexual relations with robots, and a couple of others point towards empathy for them as well. PS: these are topics from the 1950s, not “whatever the shit kids are into these days.” Most people affected by this are older generations and young adults without a social safety net.

    Turning it around and phrasing it as “LLMs are useful for finding exploits” makes it sound more like you want to use LLMs to exploit said vulnerabilities rather than for better use cases. Regardless, it’s still not possible, nor will it ever be, because again, LLMs can only use predetermined variables based on their previous training data plus random variables. (PS: the undesirable random variables are what’s commonly called hallucination; it’s just unwanted output from a huge pile of spaghetti code.) It’s even on the site you sourced:

    “Was this AI-found? AI-assisted. The starting insight — that splice() hands page-cache pages into the crypto subsystem and that scatterlist page provenance might be an under-explored bug class — came from human research by Taeyang Lee.”

    If we misread your intent, then that’s our mistake; however, the phrasing felt more like you were praising AI for finding exploits rather than for actual good uses, and it read to us like an ethical issue.

    If making it clear that LLMs do more harm than good when used as chatbots and as full-on replacements for people makes us a straw-man, then I guess we’re a straw-man or whatever lol.

    Though we can probably agree that machine learning can, should, and has been used since the 1950s as a glorified search and calculation engine for complex equations and datasets. It’s really useful for generating and categorizing protein molecules, finding patterns in cancer research, and even filtering candidates astronomers spot in the night sky; however, it’s overall useless without a qualified, passionate researcher who knows their stuff and can double-check their ML sifters.
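    The “ML sifter plus human double-check” workflow above can be sketched in a few lines. This is a toy illustration, not any specific lab’s pipeline; the function names, the stand-in “model,” and the confidence threshold are all hypothetical:

    ```python
    # Toy sketch of an "ML sifter + human review" pipeline: the model
    # only pre-sorts candidates, and anything it's unsure about gets
    # routed to a human researcher for double-checking.
    # All names and thresholds here are made up for illustration.

    def sift(candidates, score_fn, confident=0.9):
        """Split candidates into auto-accepted, auto-rejected, and
        'needs human review' buckets based on model confidence."""
        accepted, rejected, review = [], [], []
        for item in candidates:
            score = score_fn(item)      # model's confidence it's interesting
            if score >= confident:
                accepted.append(item)
            elif score <= 1 - confident:
                rejected.append(item)
            else:
                review.append(item)     # too uncertain: a human decides
        return accepted, rejected, review

    # Stand-in "model": scores sky-survey blobs by brightness alone.
    blobs = [{"id": 1, "brightness": 0.97},
             {"id": 2, "brightness": 0.02},
             {"id": 3, "brightness": 0.55}]
    auto_yes, auto_no, for_human = sift(blobs, lambda b: b["brightness"])
    ```

    The point is the third bucket: the sifter saves the researcher from eyeballing every candidate, but the ambiguous ones still land on a human’s desk.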

    Sources for the saucy beans:

    ^edit, fixed a bit of formatting lol^

    • mirshafie@europe.pub
      4 hours ago

      The strawman-building is that you’re extrapolating really, really far based on a tiny comment, and so you’re making wild assumptions that aren’t relevant to the conversation. The accusation that I’m hoping to be able to use LLMs to find bugs for nefarious reasons is far out. In fact, ironically, your text reads like something a badly (or maliciously) configured LLM would produce.

      I never claimed that somehow, unprompted, an LLM went out and found a bug. But LLMs are increasingly used as important tools for finding all kinds of problems in code. Going forward, as we get better at using these models, more bugs will likely be found. And if we can train other ML models on other kinds of data at similar scale, I think we’d be right to expect a lot.

      I have no doubt that misuse of LLMs and other machine learning models is widespread. The parasocial stuff aside, I’m worried about how it’s being used in war and targeting, which will only get worse.

      However, I think it’s a bit disingenuous to portray LLMs as glorified search engines or autocorrect. It’s not wrong, it’s technically correct, but the utility goes way beyond find-and-replace. It’s a bit like calling humans glorified tapeworms: it doesn’t really make for an interesting discussion.

      I also think you’re wrong in asserting that LLMs or other ML models can only be useful for researchers on the edge of their fields. I guess we’ll see.