OpenAI is being sued by the family of a child injured in the Canada school shooting.

(Image: AFP/Getty Images)

The family of a girl critically injured in a mass shooting at a Canadian school is suing ChatGPT-maker OpenAI, claiming the company knew the suspect was planning an attack but failed to alert the authorities.

Maya Gebala, 12, was shot in the neck and head on February 10 in Tumbler Ridge and remains in hospital. OpenAI banned an initial ChatGPT account associated with the suspect, 18-year-old Jesse Van Rootselaar, in June 2025 because of the nature of her conversations with the chatbot, but Canadian police were not notified. OpenAI told the BBC that the company was committed to making "meaningful changes" to help prevent similar tragedies in the future.


Read More:  OpenAI vows safety policy changes after Tumbler Ridge shooting


Eight people were killed in the attack, including five young children and the suspect's mother, in one of the deadliest shootings in Canadian history.

 The civil lawsuit, brought by Gebala's mother Cia Edmonds, alleges Van Rootselaar set up a ChatGPT account before she turned 18, something users under 18 can do with parental consent.

 The plaintiffs allege no age verification took place on the site.

 According to the lawsuit, the suspect described "various scenarios involving gun violence" to the chatbot over several days in late spring or early summer 2025 and viewed it as a "trusted confidante." The lawsuit claims twelve OpenAI employees flagged the posts as "indicating an imminent risk of serious harm to others" and recommended that Canadian law enforcement be informed. Instead, it is alleged, the request to contact the authorities was "rebuffed" and the only action taken was to ban Van Rootselaar's account.

 OpenAI has previously said it did not alert police because the account did not meet its threshold of a credible or imminent plan for serious physical harm to others.

 Despite the earlier flags raised by OpenAI systems, the suspect was able to create a second ChatGPT account and "continue planning scenarios involving gun violence." According to the lawsuit, the company "took no steps to act upon this knowledge" despite "specific knowledge of the shooter's long-range planning of a mass casualty event." The plaintiffs state that as a result of the company's conduct, Gebala, who was shot at three times after trying to lock a library door to keep out the shooter, has suffered a "catastrophic brain injury".


OpenAI's response


In a statement to the BBC, an OpenAI spokesperson called the events an "unspeakable tragedy", adding its thoughts remained with the victims, their families and the community.
 "OpenAI remains committed to working with government and law enforcement officials to make meaningful changes that help prevent tragedies like this in the future," a spokesperson said.
 Sam Altman, OpenAI's chief executive officer, met virtually on March 4 with Evan Solomon, Canada's minister of artificial intelligence, and David Eby, British Columbia's premier. The Wall Street Journal reports that Altman apologized to the Tumbler Ridge community and "pledged to strengthen protocols on notifying police over potentially harmful interactions."

 OpenAI's vice-president of global policy wrote an open letter to Canadian officials on February 26 and shared it with media outlets. In it, the company said it had made a number of changes in recent months, such as enlisting "mental health and behavioral experts" to assess cases and making the criteria for referring people to the police "more flexible." OpenAI stated that under the new guidelines it would have reported the suspect's ChatGPT account.

 "We commit to strengthening our detection systems to better prevent attempts to evade our safeguards and prioritize identifying the highest risk offenders," the company said. OpenAI added that it would also set up a direct line of communication with Canadian law enforcement to quickly identify any potential future cases with the "potential for real world violence."

 Canada's AI minister Evan Solomon said on February 27 that while legislators saw a willingness by the tech firm to improve its protocols, "we have not yet seen a detailed plan for how these commitments will be implemented in practice".


Source: BBC

