Copyright law bars a former competitor of Thomson Reuters from using the company’s content to create an artificial intelligence-based legal platform, a court has ruled in a decision that could lay the foundation for similar rulings over the legality of using copyrighted works to train AI systems.
U.S. District Judge Stephanos Bibas on Tuesday rejected arguments from Ross Intelligence that it's protected by the "fair use" exception to copyright protection. The court's ruling on the novel issue will likely be cited by creators suing tech companies across Hollywood, though the case doesn't involve the generation of new content by AI systems.
“Originality is central to copyright,” Bibas wrote.
Shortly after the court issued the ruling, Concord Music Group asked the federal judge overseeing its lawsuit against Amazon-backed Anthropic, which concerns the use of song lyrics to train Claude, to consider the order in evaluating the case.
The case revolves around Thomson Reuters' Westlaw legal research platform, in which users pay to access case law, state and federal statutes, law journals and regulations. Its content includes copyrighted headnotes that summarize key points of law and case holdings.
Ross, a now-defunct AI company backed by venture firm Y Combinator, used a form of those headnotes to train a competing legal search engine after Thomson Reuters declined to license the content. The key difference between this case and other AI lawsuits is that there was an intermediary that repurposed the copyrighted work for AI training. In lawsuits against OpenAI, Meta and Anthropic, among others, creators allege wholesale copying of material.
In Tuesday’s ruling, Bibas found that Ross may have infringed on more than 2,200 headnotes. To decide damages, a jury will determine whether any of Thomson Reuters’ copyrights have expired.
The court’s decision turned, in part, on whether the headnotes constitute original works protected by intellectual property law. Bibas, who ruled in favor of Ross Intelligence on summary judgment in 2023 in a decision that was withdrawn shortly before trial, sided with Thomson Reuters on the issue since headnotes can “introduce creativity by distilling, synthesizing, or explaining part of an opinion.”
“More than that, each headnote is an individual, copyrightable work,” the judge wrote. “That became clear to me once I analogized the lawyer’s editorial judgment to that of a sculptor. A block of raw marble, like a judicial opinion, is not copyrightable. Yet a sculptor creates a sculpture by choosing what to cut away and what to leave in place. That sculpture is copyrightable.”
Also of note: Bibas declined to find fair use, which can protect the use of copyrighted material to create a new work, particularly when that use is "transformative." On this issue, he noted that Ross intended to profit off its use of Thomson Reuters' headnotes, which "disfavors fair use." The court stressed, "Even taking all facts in favor of Ross, it meant to compete with Westlaw by developing a market substitute. And it does not matter whether Thomson Reuters has used the data to train its own legal search tools; the effect on a potential market for AI training data is enough."
The court pointed several times to the Supreme Court’s decision in Andy Warhol Foundation for the Visual Arts v. Goldsmith, which effectively reined in fair use. In that case, the majority said that an analysis of whether an allegedly infringing work was sufficiently transformed must be balanced against the “commercial nature of the use.” Creators are leveraging that ruling to argue that AI companies could’ve simply licensed the copyrighted material and that the markets for their works were undermined.
Randy McCarthy, an intellectual property lawyer at Hall Estill, says that the court's ruling will be "heralded by existing groups of artists and content creators as the key to their case against the other generative AI systems." He adds, "One thing is clear: merely using copyrighted material as training data to an AI cannot be said to be fair use per se."
Whether fair use applies to AI training remains among the primary battlegrounds for the mainstream adoption of the technology.