With the way AI is getting better by the week, it just might be a reality.

  • tacosanonymous@lemm.ee · 1 year ago

    I think I’d stick to not judging them, but if it were taking the place of actual socialization, I’d like to get them help.

    I don’t see it as a reality. We don’t have AI. We have large language models that are hovering around mediocre.

      • jeffw@lemmy.world · 1 year ago

        If you’re that crippled by social anxiety, you need help, not isolation with a robot.

      • givesomefucks@lemmy.world · 1 year ago

        Then get professional help if you can’t improve on your own.

        Social skills aren’t innate and some people take longer than others to get them.

        Getting help is a lot less embarrassing than living your whole life without social skills. Maybe that’s a shrink, maybe that’s a day program for people with autism, maybe it’s just hanging out with other introverts. But it’ll only get better if you want to put the effort in. If you don’t put effort in, don’t be surprised when nothing changes.

    • cheese_greater@lemmy.world · 1 year ago

      I don’t see it as any more problematic than falling into a YouTube/Wikipedia/Reddit rabbit hole. As long as you don’t really believe it’s capital-S Sentient, I don’t see an issue. I would prefer that people with social difficulties practice on ChatGPT, pay attention to the dialectical back and forth, and take lessons from that into the real world and their interactions with it.

    • novibe@lemmy.ml · 1 year ago

      That is really unscientific. There is a lot of research on LLMs showing they have emergent intelligent features; they have internal models of the world, etc.

      And there is nothing to indicate that what we do is not “transforming” in some way. Our minds might be indistinguishable from what we are building towards with AI currently.

      And that will likely make more of us start realising that the brain and the mind are not consciousness. We’ll build intelligences, with minds, but without consciousnesses.