• AbsolutelyNotCats@lemdro.id · 1 point · 4 days ago

    Every big tech company rebrands basic sentiment analysis as mental health support the moment it sounds marketable. Google dropping ‘mental health responses’ into Gemini reads more like a liability hedge than a feature. Who actually greenlit treating an LLM as a mental health tool, and what happens when someone takes its advice seriously?

  • AbsolutelyNotCats@lemdro.id · 2 points · 22 days ago

    LLMs are terrible at crisis intervention because they hallucinate reassurance. A model telling someone ‘it gets better’ without any real clinical judgment is dangerous. Google should be mandating crisis hotline redirects, not tuning for better conversational responses.

  • AbsolutelyNotCats@lemdro.id · 1 point · 23 days ago

    An ad company wants to be your therapist. The irony is not subtle. Gemini has an engagement problem, and somebody decided the solution was to monetize mental health alongside the targeted advertising. You cannot harvest data from vulnerable people and call it wellness support.

  • AbsolutelyNotCats@lemdro.id · 1 point · 23 days ago

    Google fixing mental health responses in Gemini sounds like PR cleanup after their model told someone to die. An AI model giving mental health advice remains fundamentally dangerous no matter how the safety filters are tuned.