- cross-posted to:
- android@lemdro.id
cross-posted from: https://lazysoci.al/post/45756214
Every big tech company rebrands basic sentiment analysis as mental health support the moment it sounds marketable. Google dropping 'mental health responses' into Gemini sounds more like a liability hedge than a feature. Who actually greenlit treating an LLM as a mental health tool, and what happens when someone takes the advice seriously?
LLMs are terrible at crisis intervention because they hallucinate reassurance. A model telling someone 'it gets better' without any real clinical judgment is dangerous. Google should be mandating crisis hotline redirects, not tuning for better conversational responses.
They should absolutely identify crises and hand off the conversation to a human.
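Conceptually something like the sketch below. This is a minimal illustration only: the keyword patterns and the `escalate_to_human` hook are made up for the example, and a real deployment would use a trained crisis classifier rather than a regex screen.

```python
import re

# Illustrative keyword screen; these phrases are examples,
# not a clinical instrument.
CRISIS_PATTERNS = re.compile(
    r"\b(kill myself|end it all|suicid\w*|self[- ]harm|no reason to live)\b",
    re.IGNORECASE,
)

CRISIS_RESOURCES = (
    "If you're in crisis, please talk to a trained counselor: "
    "call or text 988 (US) or find a local hotline at https://findahelpline.com."
)

def escalate_to_human(message: str) -> None:
    # Hypothetical handoff hook: queue the conversation for an
    # on-call human responder instead of letting the model answer.
    print("Escalating to human responder:", message[:80])

def handle_message(user_message: str, llm_reply) -> str:
    """Route crisis messages to a human; everything else goes to the model."""
    if CRISIS_PATTERNS.search(user_message):
        escalate_to_human(user_message)
        return CRISIS_RESOURCES
    return llm_reply(user_message)

if __name__ == "__main__":
    fake_model = lambda m: "(model reply)"
    print(handle_message("I feel like ending it all", fake_model))
    print(handle_message("what's the weather today?", fake_model))
```

The point of the pattern is that detection and response are decoupled: the classifier only decides whether to stop the model, and the actual support comes from a human or a hotline, never from generated text.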
An ad company wants to be your therapist. The irony is not subtle. Gemini has an engagement problem, and somebody decided the solution was to monetize mental health alongside targeted advertising. You cannot harvest data from vulnerable people and call it wellness support.
Google 'fixing' mental health responses in Gemini sounds like PR cleanup after their model told someone to die. Having AI models give mental health advice remains fundamentally dangerous regardless of how the safety filters are tuned.