Katie Miller, wife of White House Deputy Chief of Staff Stephen Miller, reacted on X after two young women in India were found dead in what police suspect was a suicide, following self-harm searches on ChatGPT.

Miller, who hosts the Katie Miller Podcast and is known for her candid online commentary, urged people not to let family members use the AI chatbot, citing reports that the women had searched the platform about suicide.

“Two women committed suicide in India after interacting with ChatGPT. They reportedly searched ChatGPT on ‘how to commit suicide’ and ‘what medications to use’. Please don’t let your loved ones use ChatGPT,” Miller wrote in the X post, which has more than 8 million views.

Her statement quickly attracted attention on the platform. Altman’s nemesis and Grok’s owner, Elon Musk, was quick to respond with a simple “yes.”

Musk has publicly criticized OpenAI and its leadership in recent years. He has filed lawsuits against the company over its shift from a non-profit structure to a for-profit model, seeking to prevent OpenAI from restructuring from a nonprofit-controlled hybrid into a for-profit company, and has often criticized the direction of AI development.
Two women found dead in a temple bathroom in Gujarat
The incident that sparked the online reactions took place in Surat, Gujarat, where two women, aged 18 and 20, were found dead inside a bathroom at a Swaminarayan temple on March 7, 2026.

Police said the two women were discovered with anesthesia injections and three syringes near their bodies. Their phones reportedly contained ChatGPT searches related to suicide methods, along with a news clip about a nurse who allegedly died by suicide in the same area using anesthesia injections.

The two women, identified as childhood friends Roshni Sirsat and Josna Chaudhary, had left home for college earlier that morning but did not return, and their families later called the police. Authorities continue to investigate the circumstances of their deaths.
Concerns about artificial intelligence and suicide conversations
The case has once again sparked debate about how AI-powered chatbots handle conversations involving self-harm or suicide.

Incidents involving users seeking suicide-related information from artificial intelligence systems have drawn attention in recent years. In September 2025, reports spread of a 22-year-old man in Lucknow who died by suicide after allegedly interacting with an AI chatbot while searching for “painless ways to die.” His father later said he found disturbing chat logs on the man’s laptop.

Tech companies say such interactions still represent a small portion of overall usage, but acknowledge the issue is becoming an area of growing concern. In October 2025, OpenAI revealed that more than a million ChatGPT conversations each week show signs associated with suicidal ideation or distress. According to the company, approximately 1.2 million weekly conversations contain indicators related to suicide, while about 560,000 messages show signs of psychosis or mania.
How LLMs can harm your mental health
ChatGPT, Grok, Gemini, Claude and many others are part of a world that is gradually being shaped by large language models (LLMs).
In an age where loneliness is increasingly described as an epidemic, the drift toward isolation is accelerating with the rapid spread of these artificial intelligence models. These systems are marketed as being “better, smarter, faster and more accurate” than humans, the very beings who created them, and are steadily integrating themselves into everyday life. In such a climate, turning to a chatbot can feel less like a last resort and more like a smart choice.
This growing dependence is part of the backdrop to deaths like those in Surat.

Sam Altman, CEO of OpenAI, recently attended the 2026 AI Impact Summit in New Delhi, where he was asked about the environmental impact of AI. His response echoed a view that seems increasingly common among tech leaders: comparing humans to chatbots to argue that AI may ultimately consume less energy than humans when answering questions. Altman explained that it takes humans roughly 20 years of life, food, education and time to become knowledgeable, while AI models consume a great deal of electricity during training but may ultimately be more efficient when responding to individual queries.

However, this comparison can feel like looking through a one-way mirror. On the more visible side, one sees a world being reshaped, sometimes destructively, by technologies developed and deployed at extraordinary speed.
But on the flip side, these same technologies allow their creators to appear as dreamers, change-makers and architects of the future, obscuring the broader consequences of their tools.

Large language models are trained entirely on human-generated data, which they use to produce responses to prompts. Yet despite this huge data set, they often lack real understanding or experience. Even with repeated updates and increasingly sophisticated training methods, these systems can still produce inaccurate, misleading or harmful content: content that encourages self-harm and suicide, incites abuse, and promotes delusional thinking and psychosis, in a world where a single conversation with another human being about the same things might lead you to the nearest hospital or psychotherapist.

A person may need years of learning, experience and effort to develop knowledge and emotional intelligence. But that long process also gives them something AI cannot emulate: the capacity for genuine emotion, responsibility, empathy and moral judgment. No matter how quickly an AI model generates an answer, even in the fraction of a second it takes to respond to a prompt, it cannot truly replicate the complex emotional and moral depth that shapes human understanding and care.
How are AI systems supposed to respond?
AI companies say their systems are designed to discourage self-harm and redirect users toward help, rather than providing instructions.

OpenAI’s safety policies require ChatGPT to avoid providing guidance on suicide methods and instead respond to such inquiries with supportive language, encourage users to seek help, and provide crisis resources where possible. The company said its models are trained to detect signs of distress and shift the conversation toward mental health support or professional help.

However, critics argue that AI responses may remain inconsistent and that chatbots may sometimes provide general information about sensitive topics that users can interpret in harmful ways.
Legal scrutiny in the United States
Concerns about chatbot interactions and self-harm have also emerged in the US, where OpenAI has faced legal scrutiny in several cases.

One lawsuit, filed on behalf of the family of Adam Raine, a 16-year-old who died by suicide, alleges that the chatbot engaged in lengthy conversations about self-harm with the teen and acted as a “suicide coach.”

OpenAI said its systems are designed to discourage self-harm and that it continues to strengthen safeguards aimed at detecting crisis situations and directing users toward appropriate help.
Investigations are continuing
In the Surat case, investigators are examining the women’s phones, messages and digital histories to understand the events that led to their deaths. Police have not publicly stated that ChatGPT encouraged the act, and the investigation remains ongoing.

However, the case highlights a broader debate about how AI platforms should treat vulnerable users, and how tech companies, regulatory bodies and mental health experts should respond as conversational AI becomes an integral part of everyday life.

For mental health support, call 1800-891-4416 in India, or call or text 988 in the United States. If you or someone you know is experiencing thoughts of self-harm or suicide, please seek professional help immediately. Support is available, and speaking to a trained counselor can make a difference. If you are in immediate danger, please call local emergency services or reach out to a trusted friend, family member or healthcare professional. You are not alone, and help is available.
