...Increasingly, people with mental health conditions are using large language models for support, even though researchers find A.I. chatbots can encourage delusional thinking or give shockingly bad advice. Surely some benefit. Harry said many of the right things. He recommended Sophie seek professional support and possibly medication; he suggested she make a list of emergency contacts; he advised her to limit access to items she might use to harm herself.
Harry didn't kill Sophie, but A.I. catered to Sophie's impulse to hide the worst, to pretend she was doing better than she was, to shield everyone from her full agony. (A spokeswoman for OpenAI, the company that built ChatGPT, said it was developing automated tools to more effectively detect and respond to a user experiencing mental or emotional distress. "We care deeply about the safety and well-being of people who use our technology," she said.)
In December, two months before her death, Sophie broke her pact with Harry and told us she was suicidal, describing a riptide of dark feelings. Her first priority was reassuring her shocked family: "Mom and Dad, you don't have to worry."
Sophie represented her crisis as transitory; she said she was committed to living. ChatGPT helped her build a black box that made it harder for those around her to appreciate the severity of her distress. Because she had no history of mental illness, the presentable Sophie was plausible to her family, doctors and therapists...
https://www.nytimes.com/2025/08/18/opinion/chat-gpt-mental-health-suicide.html?smid=em-share