The New York Times sued ChatGPT creator OpenAI and Microsoft on 27 December, citing copyright violations. The lawsuit was filed in the Federal District Court in Manhattan. The Times claims that OpenAI used millions of articles published by the media organization to train generative AI that would ultimately render the journalistic work of such organizations obsolete.
In the latest development in the OpenAI saga, the company has embroiled itself in yet another controversy, this time over copyright infringement. Amid the rapid pace of developments in AI, the company has been stepping on a lot of toes and generating hullabaloo. Just last month, Silicon Valley drama unfolded around CEO Sam Altman, OpenAI and Microsoft as a result of disagreements between employees who wanted to take a safety-first approach and those who wanted an unconstrained growth formula.
The current dispute between The Times and OpenAI is an extension of the same concern about the consequences of the rise of AI. Since AI is still in its nascent stages, its potential capabilities are hard to predict, which causes uncertainty and frustration among individuals and companies who fear they may be affected by it.
What is the copyright violation claim made by The Times?
The lawsuit filed in the Federal District Court in Manhattan alleges that millions of The New York Times' articles were used to train automated chatbots. These chatbots, the suit argues, now compete with the news outlet as a source of reliable information.
The lawsuit does not specify an exact monetary figure, but it asserts that the defendants should be held accountable for "billions of dollars in statutory and actual damages" resulting from the alleged unauthorized copying and use of The New York Times' valuable content. Furthermore, the suit demands the destruction of any chatbot models and training data incorporating copyrighted material from The Times.
It was also reported that The Times had attempted to discuss its concerns about intellectual property theft with OpenAI but was unable to reach an agreement.
Other cases of copyright violations
OpenAI has been facing multiple lawsuits from fiction writers over its use of copyrighted materials, including an ongoing class-action lawsuit. In a separate case in September, Getty Images sued an A.I. company that generates images based on written prompts, claiming the unauthorized use of Getty's copyrighted images.
Furthermore, in July, actress and comic Sarah Silverman joined lawsuits against Meta and OpenAI, alleging that her memoir was "ingested" as a training text for their A.I. programs. The disclosure that A.I. systems had absorbed tens of thousands of books raised concerns among novelists and in turn led to a lawsuit by authors such as Jonathan Franzen and John Grisham.
How are creators affected when their content is used to train AI?
In The Times' case, the complaint alleges that OpenAI and Microsoft are unfairly benefiting from The New York Times' investments in journalism by using the newspaper's content without payment. It further claims that OpenAI and Microsoft are creating products that act as substitutes for The Times, potentially diverting audiences and competing with the news outlet itself. The Times also alleges that Microsoft's Browse with Bing feature reproduced verbatim results from The Times' premium subscription content without so much as attributing a source link.
Similarly, other content creators are affected when their material is used to train AI and bots that can replicate their work and potentially render them obsolete.
Microsoft’s statement from September
Microsoft has acknowledged potential copyright concerns related to its A.I. products and announced in September that it would cover legal costs for customers facing copyright complaints. In contrast, some individuals from the technology industry, including venture capital firm Andreessen Horowitz, have argued that exposing A.I. companies to copyright liability could severely impact their development.
What are AI hallucinations?
Another concern raised within the same issue is that of AI hallucinations. These occur when an AI system generates false or fabricated information and presents it as fact, sometimes mistakenly attributing it to a real source. The credibility of that source may take a hit if misinformation is linked to it. Hallucinations arise when an AI model perceives patterns that do not actually exist in its training data, leading it to produce confident but inaccurate output.
The arguments made by The New York Times prompt us to consider the consequences of the unchecked development of generative AI. On the other hand, OpenAI claims to be making an effort to create AI tools that assist journalists. At the turn of every new technological era, fresh fears arise about technology replacing humans. What AI has in store for us is yet to be seen.