Written by Ryan Patrick Jones and Bhargav Acharya

TORONTO — OpenAI said Thursday it will establish a direct point of contact with Canadian law enforcement and improve detection of repeat violators of its “violent activity” policy to bolster safety protocols in the wake of a recent school shooting.
The maker of ChatGPT detailed the steps in a letter to the Canadian minister responsible for artificial intelligence, Evan Solomon.
Anne O’Leary, vice president of global policy at OpenAI, wrote the letter after Canadian ministers this week urged the ChatGPT maker to quickly strengthen its safety protocols and warned that Ottawa would force change through legislation if the company did not.
“We remain committed to cooperating with law enforcement in the investigation of the Tumbler Ridge tragedy, and we are committed to continued partnership with the federal and provincial governments,” O’Leary said, referring to the town in British Columbia where the shooting occurred.
Ottawa summoned OpenAI’s safety team for talks this week after the company said it did not contact police about an account it blocked that belonged to the alleged shooter, Jesse van Rotselaar.
Van Rotselaar, 18, is suspected of killing eight people on February 10 before killing herself at Tumbler Ridge. OpenAI said it banned her ChatGPT account last year due to policy violations.
The company said the account had been flagged by systems that identify “misuse of our models to promote violent activity” but did not provide further details. OpenAI said the issues did not meet its internal standards for reporting to law enforcement.
O’Leary said Thursday that under the company’s new “enhanced law enforcement referral protocol,” a case like the initial June account ban would now be referred to police.
She also said the company had discovered that Van Rotselaar used a second account, information it shared with law enforcement.
“We are committed to strengthening our detection systems to better prevent attempts to evade our safeguards and to prioritize identifying the most dangerous criminals,” O’Leary said.
The company has also committed to periodically evaluating the limits used by its automated systems to identify potentially violent activities by users.
Crime experts say that while greater scrutiny of artificial intelligence and social media platforms is needed, police and other authorities may also have missed opportunities to prevent one of Canada’s worst mass killings.
Police said Van Rotselaar had a history of mental health problems, and that they had removed weapons from her home but later returned them to her.
Minister Solomon’s office did not immediately respond to a request for comment.

