OpenAI says NY Times ‘hacked’ ChatGPT to build copyright lawsuit, generate misleading evidence

New York Times headquarters, the ChatGPT logo and OpenAI CEO Sam Altman

OpenAI has asked a federal judge to dismiss parts of the New York Times’ copyright lawsuit against it, arguing that the newspaper “hacked” its chatbot ChatGPT and other artificial intelligence systems to generate misleading evidence for the case.

OpenAI said in a filing in Manhattan federal court Monday that the Times caused the technology to reproduce its material through “deceptive prompts that blatantly violate OpenAI’s terms of use.”

“The allegations in the Times’s complaint do not meet its famously rigorous journalistic standards,” OpenAI said. “The truth, which will come out in the course of this case, is that the Times paid someone to hack OpenAI’s products.”

“The allegations in the Times’s complaint do not meet its famously rigorous journalistic standards,” OpenAI said. REUTERS

OpenAI did not name the “hired gun” whom it said the Times used to manipulate its systems and did not accuse the newspaper of breaking any anti-hacking laws.

“What OpenAI bizarrely mischaracterizes as ‘hacking’ is simply using OpenAI’s products to look for evidence that they stole and reproduced The Times’s copyrighted work,” the newspaper’s attorney Ian Crosby said in a statement on Tuesday.

Representatives for OpenAI did not immediately respond to requests for comment on the filing.

The Times sued OpenAI and its largest financial backer, Microsoft, in December, accusing them of using millions of its articles without permission to train chatbots to provide information to users.

The Times is among several copyright owners that have sued tech companies over the alleged misuse of their work in AI training, including groups of authors, visual artists and music publishers.

Tech companies have said their AI systems make fair use of copyrighted material and that the lawsuits threaten the growth of the potential multitrillion-dollar industry.

Courts have not yet addressed the key question of whether AI training qualifies as fair use under copyright law. So far, judges have dismissed some infringement claims over the output of generative AI systems based on a lack of evidence that AI-created content resembles copyrighted works.

The New York Times’ complaint cited several instances in which OpenAI and Microsoft chatbots gave users near-verbatim excerpts of its articles when prompted. It accused OpenAI and Microsoft of trying to “free-ride on the Times’s massive investment in its journalism” and create a substitute for the newspaper.

The Times sued Sam Altman’s OpenAI and its largest financial backer, Microsoft, in December, accusing them of using millions of its articles without permission to train chatbots to provide information to users. AFP via Getty Images
The complaint accused OpenAI and Microsoft of trying to “free-ride on the Times’s massive investment in its journalism” and create a substitute for the newspaper. Christopher Sadowski

OpenAI said in its filing that it took the Times “tens of thousands of attempts to generate the highly anomalous results.”

“In the ordinary course, one cannot use ChatGPT to serve up Times articles at will,” OpenAI said.

OpenAI’s filing also said it and other AI companies would eventually win their cases based on the fair-use question.

“The Times cannot prevent AI models from acquiring knowledge about facts, any more than another news organization can prevent the Times itself from re-reporting stories it had no role in investigating,” OpenAI said.