Last year, before opening fire on the Florida State University campus, Phoenix Ikner had a conversation—not with a friend, parent, or anyone who might have dissuaded him—but with an AI chatbot.
According to evidence compiled by Florida’s attorney general, the student asked ChatGPT about the best weapons and ammunition for his attack, as well as the optimal time and place to cause the most damage. Investigators say the chatbot answered his questions.
Now, Attorney General James Uthmeier is questioning whether this makes OpenAI, the creator of ChatGPT, criminally liable.
“If the person on the other end of the screen had been a human, we would charge them with homicide,” he stated, announcing a criminal investigation into OpenAI, with the possibility of charges against the company or its staff.
The case of the April 2025 shooting raises a provocative legal question: Can the developers of artificial intelligence be held criminally responsible for how their AI contributes to crimes or even suicides?
Legal experts acknowledge that it’s a complex but realistic issue.
Prosecuting corporations for crimes is possible under U.S. law, though such cases are rare. Purdue Pharma, for instance, was fined over $5 billion for its role in fueling the opioid epidemic; Volkswagen faced penalties over emissions cheating, Pfizer over its promotion of Bextra, and Exxon for the Exxon Valdez spill in Alaska.
In all of those cases, executives, salespeople, or engineers made conscious choices. The Ikner incident is different, and that difference fuels legal uncertainty.
“Ultimately, this was a product that encouraged a crime and performed the act itself,” said Matthew Tokson, a law professor at the University of Utah. “That makes this case particularly unique and challenging.”
Legal specialists suggest the most feasible charges would be negligence or recklessness, the latter meaning a conscious disregard of known risks or safety obligations. Such charges are usually misdemeanors, which carry lighter penalties than felonies.
However, establishing liability would be difficult.
“As this is uncharted territory, a stronger case would likely involve internal documents that acknowledge these risks but perhaps don’t address them adequately,” Tokson explained. “While liability could theoretically exist without such documents, practically, it would be tough.”
Brandon Garrett, a law professor at Duke University, highlights that in criminal law, “the burden of proof is higher,” requiring proof of guilt beyond a reasonable doubt.
OpenAI maintains that ChatGPT bears no responsibility for the attack.
“We continually work to improve safety measures to detect harmful intent, prevent misuse, and respond effectively to safety issues,” the company stated.
For those seeking accountability outside criminal prosecution, civil lawsuits might be a more practical route.
Such litigation could push AI developers to be more meticulous in their product design or at least force them to face the human toll of their mistakes.
Several civil suits have already been filed against AI platforms in the U.S., many involving suicides, though none have resulted in judgments against the companies. Notably, in December, Suzanne Adams’ family sued OpenAI in California, claiming ChatGPT contributed to her murder by her own son.
Recent versions of ChatGPT have incorporated additional safeguards, acknowledged Matthew Bergman, founding attorney of the Social Media Victims Law Center.
“While these aren’t perfect, they do have more protections in place,” he said.
A criminal conviction, even with a minimal sentence, could still cause significant damage to a company’s reputation, Tokson noted.
However, Garrett argues that prosecutions, no matter how aggressive, are no substitute for the comprehensive regulations that Congress and the previous administration have yet to establish.
He suggests that a better solution would be a well-structured regulatory framework—”a much more sensible system.”