A criminal investigation has been launched into whether an artificial intelligence (AI) chatbot played a role in the deadly mass shooting at Florida State University. Florida Attorney General James Uthmeier said his office is investigating whether ChatGPT provided instructions to the alleged gunman, Phoenix Ikner, before the attack on April 17, 2025.
The attack left two people dead and six others injured. “ChatGPT provided important advice to the shooter before he committed such heinous crimes,” Uthmeier said at a news conference in Tampa.

According to the attorney general, the chatbot gave detailed answers about weapons and planning. It suggested what type of gun and ammunition to use and what works best at close range, and even pointed out crowded areas on campus, he said. “The prosecutors looked into this, and they told me that if it was someone on the other end of the screen, we would charge them with murder,” Uthmeier said.

The shooting occurred outside the student union on the university’s Tallahassee campus. Ikner, a student, opened fire with his stepmother’s gun, killing two people and wounding six others before police shot him. The victims were identified as Robert Morales, 57, and Tiru Chabba, 45, both of whom worked for vendors on campus.
Ikner was seriously injured but survived. Authorities say the motive remains unclear, and there is no indication he knew his victims. He now faces charges including first-degree murder and attempted murder.

As part of the investigation, the Attorney General’s Office issued subpoenas to OpenAI, the company behind ChatGPT. Officials want to know how the system works, how it is trained, and how it handles users who might want to harm others.
They have also requested details about the company’s employees and any public statements related to the incident.

Legal experts say the case may be difficult to pursue. Neama Rahmani, a former federal prosecutor, said it would be hard to prove liability on the part of an AI system. “It’s unusual, and … [Uthmeier] is venturing into uncharted legal waters,” Rahmani said. Even if violations were proven, Rahmani added, any penalty would likely be financial rather than criminal.
“At the end of the day, you can’t put a company in jail anyway, so you’re talking about a fine,” he said.

OpenAI said in a statement that it was cooperating with investigators and rejected allegations of wrongdoing. A spokesperson said the chatbot’s responses were “reactive responses to questions containing information that can be found widely across public sources on the Internet. It did not encourage or promote any illegal or harmful activity.”

The company called the shooting a tragedy but insisted the chatbot was not responsible. It also said it had identified an account associated with the suspect and shared that information with law enforcement, while continuing to improve safeguards to detect malicious intent.
