OpenAI has responded to The New York Times’ lawsuit by alleging that the media giant paid someone to hack OpenAI’s platforms, producing what OpenAI calls misleading evidence in the case.
The Allegations
The heart of the matter lies in The New York Times’ accusations against OpenAI and Microsoft, which assert that both companies unlawfully used Times articles to train AI chatbots, thereby infringing the paper’s copyrights.
OpenAI’s Response
OpenAI has vehemently denied these allegations, arguing that the methods the Times used to elicit seemingly plagiarized responses fell short of journalistic standards. It accuses the Times of paying someone to manipulate its platforms and produce deceptive evidence.
Disputed Evidence
OpenAI’s filing highlights that the Times made tens of thousands of attempts to generate the specific results it cited, exploiting a bug in OpenAI’s system and violating its terms of use with deceptive prompts. The filing argues that the Times’ activity was abnormal and aimed at creating false impressions.
Ownership of Facts and Language
OpenAI’s argument extends to the notion that facts and language, the core elements of its training data, are not proprietary to the Times or any other entity. This stance challenges the idea that AI outputs built on publicly available information directly infringe copyright.
Legal Landscape
The Times’ lawsuit is part of a broader wave of legal challenges against OpenAI and other tech firms. Several writers, including notable figures such as George R. R. Martin and Sarah Silverman, have filed suits alleging copyright infringement.
Conclusion
The clash between OpenAI and The New York Times underscores the complexities of AI training data, copyright infringement, and the legal challenges faced by tech companies in navigating these issues. The outcome of this case could have far-reaching implications for AI development and content usage in the digital age.