• TheGrandNagus@lemmy.world
    3 months ago

    Indeed. GPs have been doing this for a long time. It’s nothing new, and expecting every GP to know every single ailment that humanity has ever experienced, to recall it quickly, and immediately know the course of action to take, is unreasonable. They are only human.

    Like you say, if they’re blindly following a generic ChatGPT instance trained on whatever crap it’s scraped from the internet, then that’s bad.

    If they’re aiding their search with an LLM that has been trained on a good medical dataset, then taking its output and looking into it further, there’s no issue.

    People have become so reactionary to LLMs and other AI stuff. It seems there’s an “omg it’s so cool, everybody should use it to the max. Let’s blindly trust it!” camp and an “it’s awful and shouldn’t exist, burn it all! No algorithms or machine learning anywhere. New tech is bad!” camp.

    Both camps are just as stupid. There’s zero nuance in the discussion about this stuff, and it’s tiring.

    • FarceOfWill@infosec.pub
      3 months ago

      You can build excellent expert systems that will definitely help a doctor remember all the illnesses, know what questions to ask to narrow things down or double-check it’s not something weird, and provide options for treatment.

      These exist and are good
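
      The kind of expert system described above can be sketched in a few lines: a knowledge base of rules, a step that filters candidates against observed findings, and a step that picks the most informative next question. This is a toy illustration of the technique, not a real diagnostic tool; the condition/symptom rules below are made-up placeholders.

      ```python
      # Toy rule-based expert system: each condition maps to the symptoms
      # it implies. Observed symptoms narrow the candidate list; each
      # question is chosen to split the remaining candidates evenly.
      # NOTE: the rules here are illustrative placeholders, not medicine.

      RULES = {
          "strep throat": {"sore throat", "fever", "swollen lymph nodes"},
          "common cold": {"sore throat", "runny nose", "cough"},
          "influenza": {"fever", "cough", "muscle aches"},
      }

      def candidates(observed, ruled_out):
          """Conditions consistent with the findings so far."""
          return [
              name for name, symptoms in RULES.items()
              if observed <= symptoms and not (ruled_out & symptoms)
          ]

      def next_question(observed, ruled_out):
          """Pick the unasked symptom that best halves the candidate set."""
          remaining = candidates(observed, ruled_out)
          asked = observed | ruled_out
          unasked = {s for c in remaining for s in RULES[c]} - asked
          return min(
              unasked,
              key=lambda s: abs(
                  sum(s in RULES[c] for c in remaining) - len(remaining) / 2
              ),
              default=None,
          )

      # Patient reports a sore throat, denies a runny nose:
      print(candidates({"sore throat"}, {"runny nose"}))
      ```

      Real systems (the classic example is MYCIN-style rule engines) add certainty factors and far larger rule bases, but the core loop is the same: the knowledge is explicit and auditable, which is exactly what a scraped-text LLM doesn’t give you.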

      ChatGPT isn’t an expert system, and doctors using it like one need a serious warning from the BMC and would eventually need to be struck off, the same as if they used ouija boards or bones to diagnose illnesses.

      • streetlights@lemmy.world
        3 months ago

        These exist and are good

        Any examples off the top of your head? I would assume/speculate they are fairly expensive?

    • YungOnions@sh.itjust.works
      3 months ago

      People have become so reactionary to LLMs and other AI stuff. It seems there’s an “omg it’s so cool, everybody should use it to the max. Let’s blindly trust it!” camp and an “it’s awful and shouldn’t exist, burn it all! No algorithms or machine learning anywhere. New tech is bad!” camp.

      Both camps are just as stupid. There’s zero nuance in the discussion about this stuff, and it’s tiring.

      Well said.