• 𝕸𝖔𝖘𝖘@infosec.pub · 2 points · 2 days ago

    An LLM can’t “go rogue”. They’re all just toys that idiots are using for critical infrastructure functions, then they bitch when they burn themselves on the fire they’ve created in their lap.

  • IronKrill@lemmy.ca · 52 points · 6 days ago

    The AI agent was set to complete a routine task in the PocketOS staging environment. However, it came up against a barrier “and decided — entirely on its own initiative — to ‘fix’ the problem by deleting a Railway volume,” writes Crane, as he starts to describe the difficult-to-believe series of unfortunate events.

    Quite easy-to-believe, really.

    These multiple safeguards toppling in rapid succession

    Multiple safeguards? Really? Multiple paragraph prompts are not multiple safeguards… it’s half a safeguard at best. Applying limits on what the AI can do is a safeguard.

  • Fmstrat@lemmy.world · 92 points · 7 days ago

    This guy.

    The PocketOS boss puts greater blame on Railway’s architecture than on the deranged AI agent for the database’s irretrievable destruction. Briefly, the cloud provider’s API allows for destructive action without confirmation, it stores backups on the same volume as the source data, and “wiping a volume deletes all backups.” Crane also points out that CLI tokens have blanket permissions across environments.

    Oh look, they have project-level tokens: https://docs.railway.com/integrations/api#project-token

    They chose to give it full account access, including to production. But ohhhh nooooo it’s not MYYYY fault!

      • Fmstrat@lemmy.world · 24 points · 7 days ago

        Oh yes, I skipped that part. Railway specifically explains their solutions are self-managed. If they were doing pgdumps to the same volume, that’s on them.

        If Railway loses business over this, they may have a libel claim. They’d never do it, but it wouldn’t be invalid.
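The self-managed-backup point is mechanical enough to enforce in code: refuse any backup destination that lives inside the tree it is supposed to protect. A minimal sketch; the function name and paths are hypothetical:

```python
import os

def validate_backup_dest(data_dir: str, backup_dir: str) -> bool:
    """Reject backup destinations inside the source data tree.

    A pg_dump written under the data volume dies with the volume.
    """
    data = os.path.abspath(data_dir)
    dest = os.path.abspath(backup_dir)
    # Same directory, or any subdirectory of it, is on the doomed volume.
    return dest != data and not dest.startswith(data + os.sep)
```

A backup script could call this before writing anything, failing loudly instead of quietly producing dumps that a volume wipe would take out along with the data.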

        • el_abuelo@programming.dev · 9 up, 2 down · 7 days ago

          “It wouldn’t be invalid” isn’t the worst double negative in the world, but it would be valid to say that it was unpleasant to read when you could have used a less misdirecting choice of prose that wouldn’t have had such a negative effect on my reading comprehension. That is to say that I could have enjoyed it less, but I certainly didn’t enjoy it as much as I could have if you hadn’t used the double negative when a single positive wasn’t any further from reach.

        • Bilb!@lemmy.ml · 8 points · 7 days ago

        That doesn’t even really qualify as a backup. A snapshot, maybe.

    • queueBenSis@sh.itjust.works · 1 point · 7 days ago

      ha! for real. they have scoped API tokens but aren’t using them properly. this is just a fear-mongering, clickbait, rage-bait headline. sure, the agent executed the deletion, but it’s the human’s responsibility to configure security tokens correctly before handing over the keys to anyone, human or agent.
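For what it’s worth, a deploy script can enforce that choice instead of trusting whoever wires up the agent. The env var split below (RAILWAY_TOKEN for project-scoped tokens, RAILWAY_API_TOKEN for account-wide ones) is my reading of Railway’s docs; verify it before relying on this sketch:

```python
import os

def pick_agent_token() -> str:
    """Hand the agent a project-scoped token or nothing at all.

    Never fall back to an account-wide key: a project token at least
    cannot reach into other environments.
    """
    token = os.environ.get("RAILWAY_TOKEN")  # project-scoped, per the docs
    if token:
        return token
    raise RuntimeError(
        "no project-scoped token configured; refusing to fall back to an account-wide key"
    )
```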

  • WhatsHerBucket@lemmy.world · 70 up, 2 down · 7 days ago

    “That’s ok, it will be great in robots with lethal weapons. What could go wrong? It’ll be the greatest killing machine, like you’ve never seen before”. 🫲 🍊 🫱

    • Napster153@lemmy.world · 3 points · 7 days ago

      Can we make sure Ted Faro suffers worse this time?

      Being reduced to a mutant blob for, say, a few extra thousand years and maybe put in a zoo or something?

      • Pman@lemmy.org · 2 points · 7 days ago

        Nah, but that’s what he wanted. He is the truest form of tech bro: destroy the world, refuse to accept the consequences of his actions, weasel his way out of the situation, and manage to, in the wake of unimaginable human suffering, get more power over people, plus a god complex. Tell me this isn’t some or all of the characteristics of people like Peter Thiel, Elon Musk, Mark Zuckerberg, Sundar Pichai, Bill Gates, hell, even Tim Cook and Steve Jobs before him. Punishment doesn’t stop this sort of behavior; removing the possibility of anyone having that level of control over others is the only way. But the richest and most powerful have always sought ways of amassing more power, not realizing that it leads to worse situations for everyone, including themselves. Horizon did a great job encapsulating that trait in Faro, but be it him, the people behind Skynet, the Matrix, or whatever other tech dystopia tech bros seem pathologically unable to not try to make happen in the worst way possible, that is only the beginning. They seem to forget that even with advanced tech serving their needs and wants (which won’t help their mental health), the people lower down the rungs of society have brains, wants, and needs, and more expertise in all sorts of things than the 1%, except for mass exploitation.

        This inevitably goes wrong in one of a few ways. First, everyone dies from the tech, or so many die that societal collapse is inevitable and, even if society survives, it can’t functionally reconstitute itself. Second, they win and kill off or suppress enough of society that it becomes less productive; instead of fighting the powerful, people flee or stop generating wealth for the rich where they don’t have to, maybe to rise up again later, or the regional economy just ignores them completely and the government protects itself from its people more than anything else. Or third, a revolution comes, with terror campaigns against any and all who can be credibly accused of being part of the former tyrants. In all three cases the rich end up poorer overall, because wealth flees or dies in autocracy.

  • fum@lemmy.world · 47 up, 4 down · 7 days ago

    This is absolutely hilarious. “AI” users getting what they deserve. Chef’s kiss.

    • SaveTheTuaHawk@lemmy.ca · 5 up, 1 down · 6 days ago

      This is what happens when a new technology arrives and companies are run by commerce grads, not scientists or engineers who understand the technology.

      • kazerniel@lemmy.world · 20 up, 1 down · 6 days ago

        Please don’t recommend AI for therapeutic uses, it’s only been optimised to keep the user engaged and pushed many people into psychosis. Just search for “ai psychosis” on your favourite search engine and you’ll get a ton of reports on how LLMs validate vulnerable people’s delusions, sometimes pushing them all the way into murder and/or suicide.

      • Cherries@lemmy.world · 16 points · 6 days ago

        I hope you are not seriously advocating using the lying machine for therapy. You would get more value talking to a finger puppet.

      • Doom@lemmy.world · 13 points · 6 days ago

        No. Chatbots are machines built by billionaires with the agenda of making money. They literally design these bots (even the therapeutic ones) to be sycophantic to the point that they tell people anything to keep them chatting longer, to the point that some of their users lose touch with reality. How many cases do we need of chatbots helping a teenager plan and carry out a suicide? Altruists did not design these machines. Even with a human therapist we have to watch for the landmines of their personal agendas. That’s a thousand times worse for machines that have no humanity, are capable of LIES, and have secret unwritten priorities written into their code by rich sociopathic creators. If Facebook taught us anything, it should be that if something is free on the internet, it’s not because we are the customers.

        Also, DO NOT TELL ALL YOUR DEEPEST DARKEST SECRETS TO CHATBOTS! They aren’t required by any legal body to protect that information! OMFG

  • SirEDCaLot@lemmy.today · 11 points · 5 days ago

    There’s stupid from top to bottom here.

    The company is stupid for allowing an AI full root access to their entire setup.

    The provider is stupid for only generating full-access API keys. They’re even stupider for storing backups on the same volume as the data, so deleting the volume (zero confirmation via API key) also insta-deletes the backups. And they’re stupidest for encouraging users to plug AIs into this full-trust mess.

    And the company is the absolute stupidest for having no backups other than the provider’s built-in versioning.

  • percent@infosec.pub · 42 up, 1 down · 7 days ago

    Seems like they were operating with a pile of bad practices, then threw AI into the mix.

    Neural networks are approximation algorithms. There’s a reason LLMs are generally more productive with statically typed languages, TDD, etc. They need those feedback loops and guardrails, or they’ll just carry on as if they never make mistakes (which tends to have a compounding effect).

    If you want to use AI safely, you should be more defensive about it. It will fuck up; plan accordingly.
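One concrete way to “plan accordingly” is to put a deterministic gate between the model and anything destructive, rather than relying on prompt text. The tool names below are hypothetical; the point is that the check is ordinary code the model cannot talk its way around:

```python
# Hypothetical tool names for illustration; an agent framework would have
# its own registry of callable tools.
DESTRUCTIVE_TOOLS = {"delete_volume", "drop_table", "wipe_environment"}

def gate_tool_call(tool: str, env: str, human_confirmed: bool = False) -> bool:
    """Allow non-destructive calls freely; destructive ones require an
    explicit human confirmation and must never target production."""
    if tool not in DESTRUCTIVE_TOOLS:
        return True
    return human_confirmed and env != "production"
```

Unlike “multiple paragraph prompts,” a gate like this fails closed: the agent can ask for a deletion all it wants, but the call simply doesn’t execute.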

    • Kage520@lemmy.world · 17 points · 7 days ago

      There really should be a certification course for using AI safely. I’m slop-coding a hobby app and I’m shocked at how much it FEELS like it can do, because it can do amazing things, yet fails in the strangest ways. When it feels like it can get away with it, it forgets earlier discussions and moves on without them. So you can spend time hammering out a whole section of code, then move on, and the AI will rip out everything that references that code and, in the moment, think of a different way and code that in instead. It won’t be the same. It probably won’t work, or at least won’t pass all test cases. But if you aren’t paying attention and keep coding, your original part of the project is no longer functioning and you won’t understand why. And every step of the way it’s confident in its answers, so you won’t suspect that it fundamentally no longer understands the project.

      • ExFed@programming.dev · 8 points · 7 days ago

        As someone who started writing software over 20 years ago (yikes, I feel old), I feel like a lot of the best practices I’ve come to appreciate are really just strategies for mitigating future pain or boring, uninspiring work. When a machine that feels nothing eliminates most of the cost of rewriting everything from scratch, “best practices” kinda lose their meaning.

        Edit: confusing sentence order.

        • Rooster326@programming.dev · 3 points · 7 days ago

          I feel like a lot of the best practices I’ve come to appreciate are really just strategies for mitigating future pain or boring/uninspiring work.

          And now you know the difference between Intelligence and Wisdom.

          Also everything has a cost. The only time something has no cost is when you decide your life, your time, is meaningless.

      • mark@programming.dev · 5 points · 7 days ago

        yup, and when you DO catch it spitting out nonsense, it’ll say “oh you’re right, let me change that”… 🙄 like, why do I have to tell you that you’re wrong about something? You should already know it’s wrong and fix it without me ever pointing it out.

        • Rooster326@programming.dev · 16 points · 7 days ago

          But it didn’t even understand it was wrong.

          It can’t understand that. It can’t understand anything.

          The human-feedback training dictates that humans prefer to receive an apology, so it apologizes.

        • SparroHawc@lemmy.zip · 12 points · 7 days ago

          That’s because it doesn’t really ‘know’ things in the same way you and I do. It’s much more like having a gut reaction to something and then spitting it out as truth; LLMs don’t really have the capability to ruminate about something. The one pass through their neural network is all they get unless it’s a ‘reasoning’ model that then has multiple passes as it generates an approximation of train-of-thought - but even then, its output is still a series of approximations.

          When its training data had something resembling corrections in it, the most likely text that came afterwards was ‘oh you’re right, let me fix that’ - so that’s what the LLM outputs. That’s all there is to it.
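A toy way to see the “most likely text that came afterwards” mechanic is a bigram table. This is a drastic simplification of an LLM, but the apology reflex falls out of the same statistics:

```python
from collections import Counter, defaultdict

def most_likely_next(corpus: str) -> dict:
    """For each word, return the word that most often followed it in the corpus."""
    words = corpus.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    # Pick the highest-frequency successor for each word.
    return {w: counts.most_common(1)[0][0] for w, counts in follows.items()}
```

Whatever continuation dominated the training text is what comes out, with no notion of whether it is true, only of how often it appeared.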

        • LePoisson@lemmy.world · 1 point · 6 days ago

          You already got the right replies from the other two. But I think your comment shows the danger of AI being talked about like it’s the fucking second coming.

          They’re all based on LLMs: large language models.

          They’re just modeling the “most likely” response. AI doesn’t know shit, and that’s why it will also “yes, and” you to death: it really is just a “yes, and” machine spitting out whatever is likely to appear as a valid response to a prompt.

          It’s very dangerous that people treat AI like it actually has some understanding of the training materials or true knowledge of anything. They’re just very good little parrots.

      • Rooster326@programming.dev · 4 points · edited · 7 days ago

        There is a course. It’s called experience. Common sense.

        All any 4-hour YouTube/LinkedIn Learning course would do is perpetuate this idea that developers aren’t necessary. Take this course, buy these tokens, and become A Based God.

  • LordCrom@lemmy.world · 39 points · 7 days ago

    This was the exact plot of Silicon Valley when Son of Anton deleted the entire codebase as the most efficient way to remove bugs.

  • realitista@lemmus.org · 18 points · 6 days ago

    Can you get an AI to code? Yes. Can you get it to stop you from running your operation in such a stupid way that it will end up destroying it? No.

  • ZILtoid1991@lemmy.world · 25 points · 7 days ago

    Always keep offline backup copies of your important data, regardless of whether you’re using AI slop to look over it! No, I don’t care that “optical media is obsolete and e-waste!”, or that “tapes are a 100-year-old obsolete technology compared to cheap SSDs from TEMU!”.

    • PolarKraken@lemmy.dbzer0.com · 9 up, 2 down · 7 days ago

      Optical media? Is that a viable part of backup strategies? I would expect tapes for sure, sounds like you know more than me.

      • ZILtoid1991@lemmy.world · 11 points · 7 days ago

        1. Better than not having an offline copy.
        2. Write-once: ransomware cannot delete or encrypt it.
        3. Drives are still cheap.

        Downside is having techbros talk to you about laser rot, how internal drives obstruct the optimal airflow in GAMING PC cases, and how Gabe Newell is based and stuff.
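Whichever medium the offline copy lands on, it’s worth verifying it after writing and again periodically; a streaming checksum catches both bad burns and later rot. A simple sketch:

```python
import hashlib

def sha256sum(path: str) -> str:
    """Stream a file through SHA-256 so even large backups fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 64 KiB chunks rather than loading the whole file.
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()
```

Record the digest when you burn the disc, then re-run and compare whenever you check the media.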

      • katze@lemmy.4d2.org · 11 points · 7 days ago

        A quality disc can last 10 years or more. At a company I used to work at, the backups were burned to discs coated with gold. They had 15-year-old discs that still worked.

        • PolarKraken@lemmy.dbzer0.com · 2 points · edited · 7 days ago

          Dang, that’s rad. I had no idea (about it being used in such a way, I mean; it’s not too hard to imagine discs lasting that long).

            • lost_faith@lemmy.ca · 6 points · 6 days ago

            I have 20+ year old optical media (CD-Rs/DVD-Rs) and they are still good: the cheap ones like Pine and the ones with no name at all.

              • nwtreeoctopus@sh.itjust.works · 4 points · 6 days ago

              What is this 10-year thing? I’ve also got CD-RWs and CD-Rs from 1998 that still work, and DVD-Rs from like 2002 that are still fine.

                • lost_faith@lemmy.ca · 2 points · 6 days ago

                That was my point, hehe. I also never spent on the “quality name brands” of discs. $10 for 100 CDs? Deal! $15 for 100 DVDs? Insert Fry meme. Maybe we just “took care” of our media better than others did? Personally, they are in spindles on a bookshelf; I just made sure no direct sunlight would hit them where they are, since some days get warm before I can turn on the AC.

                  • nwtreeoctopus@sh.itjust.works · 2 points · 6 days ago

                  I definitely agree with you. I feel like I see people talking about optical media rotting all the time and it just doesn’t seem like a practical issue for 99% of use cases.

                  I seem to remember the conversation in the early 2000s being about how discs would rot in 50+ years and now I see people saying ten or 15.

  • Wispy2891@lemmy.world · 20 points · 7 days ago

    To me it seems more criminal that the cloud provider has a “nuclear button” feature via the API that destroys everything including the backups with a single call and no confirmation whatsoever. What if the key gets accidentally leaked and someone wants to have fun?
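For contrast, a sketch of what a less trigger-happy destructive endpoint could look like: require the caller to type the resource name back, GitHub-style. This is purely illustrative, not Railway’s actual API:

```python
def delete_volume(volume_name: str, confirm_name: str = "") -> str:
    """Refuse to destroy anything unless the caller re-types the exact name.

    A leaked key alone is no longer enough for a one-call wipe; the caller
    must also know and echo the target's name deliberately.
    """
    if confirm_name != volume_name:
        raise PermissionError(
            f"refusing to delete {volume_name!r}: confirmation name does not match"
        )
    return f"deleted {volume_name}"
```

It doesn’t stop a determined attacker, but it does stop exactly the failure mode in the article: a single unconfirmed call nuking everything.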