
OpenAI whistleblower found dead of apparent suicide

  • Suchir Balaji, a former OpenAI researcher, was found dead in his apartment on November 26, reports say.

  • Balaji, 26, worked as a researcher at OpenAI for four years and left the company in August.

  • He had accused his employer of violating copyright law with the highly popular ChatGPT model.

Suchir Balaji, a former OpenAI researcher who spent four years at the company, was found dead in his San Francisco apartment on November 26, according to multiple reports. He was 26.

Balaji had recently criticized OpenAI over the way the startup collects data from the internet to train its AI models. One of his roles at OpenAI was to collect this information to develop the company’s powerful GPT-4 AI model, and he worried about how it could undermine the way content is created and shared on the internet.

A spokesperson for the San Francisco Police Department told Business Insider that “no evidence of foul play was found during the initial investigation.”

David Serrano Sewell, executive director of the city’s Office of Chief Medical Examiner, told the San Jose Mercury News that “the manner of death has been determined to be suicide.” A spokesperson for the city’s medical examiner’s office did not immediately respond to a request for comment from BI.


“We are devastated to learn of this incredibly sad news today and our thoughts go out to Suchir’s loved ones at this difficult time,” an OpenAI spokesperson said in a statement to BI.

In October, Balaji published an essay on his personal website raising questions about what is considered “fair use” and whether it could apply to the training data OpenAI used for its highly popular ChatGPT model.

“While generative models rarely produce results that are substantially similar to their training inputs, the process of training a generative model involves making copies of copyrighted data,” Balaji wrote. “If these copies are not authorized, this could potentially be considered copyright infringement, depending on whether or not the specific use of the design qualifies as ‘fair use’. Because fair use is determined on a case-by-case basis, no broad statement can be made about when generative AI qualifies for fair use.”

Balaji argued in his personal essay that training AI models with large amounts of data copied for free from the internet could potentially harm online knowledge communities.


He cited a research paper describing the example of Stack Overflow, a coding Q&A website that saw a big drop in traffic and user engagement after ChatGPT and AI models like GPT-4 came out.

Large language models and chatbots answer user questions directly, so people now have less need to go to original sources for answers.

In the case of Stack Overflow, chatbots and LLMs answer coding questions, meaning fewer people visit Stack Overflow to ask that community for help. This in turn means that the coding website generates less new human content.

Elon Musk has warned about this phenomenon, calling it “Death by LLM.”

OpenAI is facing multiple lawsuits accusing the company of copyright infringement.

The New York Times sued OpenAI last year, accusing the startup and Microsoft of “unlawfully using The Times’ work to create artificial intelligence products that compete with it.”

In an interview with The Times published in October, Balaji said chatbots like ChatGPT are taking away the commercial value of people’s work and services.

“This is not a sustainable model for the internet ecosystem as a whole,” he told the publication.


In a statement to the Times about Balaji’s allegations, OpenAI said: “We build our AI models using publicly available data, in a manner that is protected by fair use and related principles, and supported by long-standing and widely accepted legal precedents. We view this principle as fair to makers, necessary for innovators, and critical to American competitiveness.”

Balaji was later named in the Times’ lawsuit against OpenAI as a “custodian,” or a person who holds documents relevant to the case, according to a letter filed November 18 and seen by BI.

If you or someone you know is experiencing depression or has had thoughts of self-harm or suicide, seek help. In the US you can call or text 988 to reach the Suicide and Crisis Lifeline, which provides 24/7 free, confidential support to those in need, as well as best practices for professionals and resources to assist with prevention and crisis situations. Help is also available through the Crisis Text Line; just text “HOME” to 741741. The International Association for Suicide Prevention provides resources for people outside the US.

Read the original article on Business Insider
